2025-09-13 00:00:08.384308 | Job console starting
2025-09-13 00:00:08.396142 | Updating git repos
2025-09-13 00:00:08.645343 | Cloning repos into workspace
2025-09-13 00:00:08.860860 | Restoring repo states
2025-09-13 00:00:08.880764 | Merging changes
2025-09-13 00:00:08.880782 | Checking out repos
2025-09-13 00:00:09.324115 | Preparing playbooks
2025-09-13 00:00:10.103429 | Running Ansible setup
2025-09-13 00:00:16.857626 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-13 00:00:18.993434 |
2025-09-13 00:00:18.993567 | PLAY [Base pre]
2025-09-13 00:00:19.037478 |
2025-09-13 00:00:19.037600 | TASK [Setup log path fact]
2025-09-13 00:00:19.096308 | orchestrator | ok
2025-09-13 00:00:19.133527 |
2025-09-13 00:00:19.133667 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-13 00:00:19.162525 | orchestrator | ok
2025-09-13 00:00:19.204436 |
2025-09-13 00:00:19.204560 | TASK [emit-job-header : Print job information]
2025-09-13 00:00:19.298751 | # Job Information
2025-09-13 00:00:19.299025 | Ansible Version: 2.16.14
2025-09-13 00:00:19.299066 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-09-13 00:00:19.299100 | Pipeline: periodic-midnight
2025-09-13 00:00:19.299148 | Executor: 521e9411259a
2025-09-13 00:00:19.299172 | Triggered by: https://github.com/osism/testbed
2025-09-13 00:00:19.299193 | Event ID: 4159687b04be44d18585573c2a95d05b
2025-09-13 00:00:19.312057 |
2025-09-13 00:00:19.312200 | LOOP [emit-job-header : Print node information]
2025-09-13 00:00:19.693940 | orchestrator | ok:
2025-09-13 00:00:19.694073 | orchestrator | # Node Information
2025-09-13 00:00:19.694101 | orchestrator | Inventory Hostname: orchestrator
2025-09-13 00:00:19.694132 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-13 00:00:19.694151 | orchestrator | Username: zuul-testbed03
2025-09-13 00:00:19.694168 | orchestrator | Distro: Debian 12.12
2025-09-13 00:00:19.694187 | orchestrator | Provider: static-testbed
2025-09-13 00:00:19.694204 | orchestrator | Region:
2025-09-13 00:00:19.694221 | orchestrator | Label: testbed-orchestrator
2025-09-13 00:00:19.694237 | orchestrator | Product Name: OpenStack Nova
2025-09-13 00:00:19.694253 | orchestrator | Interface IP: 81.163.193.140
2025-09-13 00:00:19.708887 |
2025-09-13 00:00:19.708999 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-13 00:00:21.077584 | orchestrator -> localhost | changed
2025-09-13 00:00:21.085931 |
2025-09-13 00:00:21.086069 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-13 00:00:23.135221 | orchestrator -> localhost | changed
2025-09-13 00:00:23.152966 |
2025-09-13 00:00:23.153062 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-13 00:00:23.800695 | orchestrator -> localhost | ok
2025-09-13 00:00:23.806351 |
2025-09-13 00:00:23.806440 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-13 00:00:23.829870 | orchestrator | ok
2025-09-13 00:00:23.887436 | orchestrator | included: /var/lib/zuul/builds/6adfe52b30654ba48ae13a9ef77a3415/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-13 00:00:23.918357 |
2025-09-13 00:00:23.918454 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-13 00:00:27.830846 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-13 00:00:27.831000 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/6adfe52b30654ba48ae13a9ef77a3415/work/6adfe52b30654ba48ae13a9ef77a3415_id_rsa
2025-09-13 00:00:27.831032 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/6adfe52b30654ba48ae13a9ef77a3415/work/6adfe52b30654ba48ae13a9ef77a3415_id_rsa.pub
2025-09-13 00:00:27.831054 | orchestrator -> localhost | The key fingerprint is:
2025-09-13 00:00:27.831077 | orchestrator -> localhost | SHA256:MY8csqjuvSTtxc8vXYjCW3IOI3PZYKuXLhD6mRF8it8 zuul-build-sshkey
2025-09-13 00:00:27.831095 | orchestrator -> localhost | The key's randomart image is:
2025-09-13 00:00:27.831135 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-13 00:00:27.831155 | orchestrator -> localhost | |                 |
2025-09-13 00:00:27.831173 | orchestrator -> localhost | |                 |
2025-09-13 00:00:27.831190 | orchestrator -> localhost | | . . +           |
2025-09-13 00:00:27.831206 | orchestrator -> localhost | | + ..o+ *        |
2025-09-13 00:00:27.831223 | orchestrator -> localhost | | o =.o.=S...     |
2025-09-13 00:00:27.831242 | orchestrator -> localhost | |o +oo.X = . .    |
2025-09-13 00:00:27.831259 | orchestrator -> localhost | | oo*o=o@ . .     |
2025-09-13 00:00:27.831275 | orchestrator -> localhost | | .==E.+oo .      |
2025-09-13 00:00:27.831292 | orchestrator -> localhost | | .o +=. oo.      |
2025-09-13 00:00:27.831308 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-13 00:00:27.831348 | orchestrator -> localhost | ok: Runtime: 0:00:02.625844
2025-09-13 00:00:27.837224 |
2025-09-13 00:00:27.837302 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-13 00:00:27.894679 | orchestrator | ok
2025-09-13 00:00:27.903557 | orchestrator | included: /var/lib/zuul/builds/6adfe52b30654ba48ae13a9ef77a3415/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-13 00:00:27.933029 |
2025-09-13 00:00:27.933118 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-13 00:00:27.974779 | orchestrator | skipping: Conditional result was False
2025-09-13 00:00:27.982144 |
2025-09-13 00:00:27.982242 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-13 00:00:28.822306 | orchestrator | changed
2025-09-13 00:00:28.827492 |
2025-09-13 00:00:28.827571 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-13 00:00:29.141292 | orchestrator | ok
2025-09-13 00:00:29.146439 |
2025-09-13 00:00:29.146521 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-13 00:00:29.603175 | orchestrator | ok
2025-09-13 00:00:29.612097 |
2025-09-13 00:00:29.612200 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-13 00:00:30.086568 | orchestrator | ok
2025-09-13 00:00:30.091452 |
2025-09-13 00:00:30.091534 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-13 00:00:30.137618 | orchestrator | skipping: Conditional result was False
2025-09-13 00:00:30.143152 |
2025-09-13 00:00:30.143272 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-13 00:00:30.984368 | orchestrator -> localhost | changed
2025-09-13 00:00:31.001719 |
2025-09-13 00:00:31.001816 | TASK [add-build-sshkey : Add back temp key]
2025-09-13 00:00:31.854600 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/6adfe52b30654ba48ae13a9ef77a3415/work/6adfe52b30654ba48ae13a9ef77a3415_id_rsa (zuul-build-sshkey)
2025-09-13 00:00:31.854783 | orchestrator -> localhost | ok: Runtime: 0:00:00.046828
2025-09-13 00:00:31.860669 |
2025-09-13 00:00:31.860749 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-13 00:00:32.533915 | orchestrator | ok
2025-09-13 00:00:32.543676 |
2025-09-13 00:00:32.543759 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-13 00:00:32.580955 | orchestrator | skipping: Conditional result was False
2025-09-13 00:00:32.684401 |
2025-09-13 00:00:32.684502 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-13 00:00:33.275887 | orchestrator | ok
2025-09-13 00:00:33.291959 |
2025-09-13 00:00:33.292048 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-13 00:00:33.335418 | orchestrator | ok
2025-09-13 00:00:33.348411 |
2025-09-13 00:00:33.348507 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-13 00:00:33.882492 | orchestrator -> localhost | ok
2025-09-13 00:00:33.888412 |
2025-09-13 00:00:33.888488 | TASK [validate-host : Collect information about the host]
2025-09-13 00:00:35.728596 | orchestrator | ok
2025-09-13 00:00:35.760831 |
2025-09-13 00:00:35.760928 | TASK [validate-host : Sanitize hostname]
2025-09-13 00:00:35.842232 | orchestrator | ok
2025-09-13 00:00:35.850737 |
2025-09-13 00:00:35.850819 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-13 00:00:37.091380 | orchestrator -> localhost | changed
2025-09-13 00:00:37.096361 |
2025-09-13 00:00:37.096443 | TASK [validate-host : Collect information about zuul worker]
2025-09-13 00:00:37.568651 | orchestrator | ok
2025-09-13 00:00:37.573101 |
2025-09-13 00:00:37.573208 | TASK [validate-host : Write out all zuul information for each host]
2025-09-13 00:00:38.487514 | orchestrator -> localhost | changed
2025-09-13 00:00:38.495820 |
2025-09-13 00:00:38.495902 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-13 00:00:38.761763 | orchestrator | ok
2025-09-13 00:00:38.767641 |
2025-09-13 00:00:38.767735 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-13 00:01:16.909955 | orchestrator | changed:
2025-09-13 00:01:16.910198 | orchestrator | .d..t...... src/
2025-09-13 00:01:16.910236 | orchestrator | .d..t...... src/github.com/
2025-09-13 00:01:16.910260 | orchestrator | .d..t...... src/github.com/osism/
2025-09-13 00:01:16.910281 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-13 00:01:16.910302 | orchestrator | RedHat.yml
2025-09-13 00:01:16.969242 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-13 00:01:16.969261 | orchestrator | RedHat.yml
2025-09-13 00:01:16.969315 | orchestrator | = 1.53.0"...
2025-09-13 00:01:27.556659 | orchestrator | 00:01:27.556 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-09-13 00:01:27.729846 | orchestrator | 00:01:27.729 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-13 00:01:28.280732 | orchestrator | 00:01:28.280 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-13 00:01:28.355487 | orchestrator | 00:01:28.355 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-13 00:01:28.955153 | orchestrator | 00:01:28.954 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-13 00:01:29.043299 | orchestrator | 00:01:29.043 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-13 00:01:29.583130 | orchestrator | 00:01:29.582 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-13 00:01:29.583202 | orchestrator | 00:01:29.583 STDOUT terraform: Providers are signed by their developers.
2025-09-13 00:01:29.583211 | orchestrator | 00:01:29.583 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-13 00:01:29.583359 | orchestrator | 00:01:29.583 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-13 00:01:29.583368 | orchestrator | 00:01:29.583 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-13 00:01:29.583493 | orchestrator | 00:01:29.583 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-13 00:01:29.583545 | orchestrator | 00:01:29.583 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-13 00:01:29.583578 | orchestrator | 00:01:29.583 STDOUT terraform: you run "tofu init" in the future.
2025-09-13 00:01:29.583643 | orchestrator | 00:01:29.583 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-13 00:01:29.583768 | orchestrator | 00:01:29.583 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-13 00:01:29.583858 | orchestrator | 00:01:29.583 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-13 00:01:29.583888 | orchestrator | 00:01:29.583 STDOUT terraform: should now work.
2025-09-13 00:01:29.583969 | orchestrator | 00:01:29.583 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-13 00:01:29.584047 | orchestrator | 00:01:29.583 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-13 00:01:29.584124 | orchestrator | 00:01:29.584 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-13 00:01:29.682048 | orchestrator | 00:01:29.680 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-09-13 00:01:29.682117 | orchestrator | 00:01:29.680 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-13 00:01:29.868138 | orchestrator | 00:01:29.867 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-13 00:01:29.868195 | orchestrator | 00:01:29.868 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-13 00:01:29.868204 | orchestrator | 00:01:29.868 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-13 00:01:29.868209 | orchestrator | 00:01:29.868 STDOUT terraform: for this configuration.
2025-09-13 00:01:30.006965 | orchestrator | 00:01:30.006 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-09-13 00:01:30.007030 | orchestrator | 00:01:30.006 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-09-13 00:01:30.120978 | orchestrator | 00:01:30.120 STDOUT terraform: ci.auto.tfvars
2025-09-13 00:01:30.128995 | orchestrator | 00:01:30.128 STDOUT terraform: default_custom.tf
2025-09-13 00:01:30.243760 | orchestrator | 00:01:30.243 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
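For reference, the provider set that this `tofu init` resolved could be pinned with a `required_providers` block along the following lines. This is a sketch reconstructed from the install messages above, not the testbed's actual Terraform source; the only constraint visible in this log is `>= 2.2.0` on hashicorp/local (plus a truncated `>= 1.53.0` fragment whose provider is not shown), so the other version expressions here are assumptions based on the versions that were installed.

```hcl
terraform {
  required_providers {
    # Versions below are the ones actually installed in this run;
    # the constraint syntax is illustrative.
    null = {
      source  = "hashicorp/null"
      version = "3.2.4"
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "3.3.2"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # resolved to 2.5.3 in this run
    }
  }
}
```

Committing the generated `.terraform.lock.hcl` alongside such a block, as the init output recommends, keeps later `tofu init` runs on the same provider selections.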
2025-09-13 00:01:31.160965 | orchestrator | 00:01:31.160 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-09-13 00:01:31.680467 | orchestrator | 00:01:31.680 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-09-13 00:01:31.918572 | orchestrator | 00:01:31.918 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-09-13 00:01:31.918645 | orchestrator | 00:01:31.918 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-09-13 00:01:31.918652 | orchestrator | 00:01:31.918 STDOUT terraform:   + create
2025-09-13 00:01:31.918658 | orchestrator | 00:01:31.918 STDOUT terraform:  <= read (data resources)
2025-09-13 00:01:31.918665 | orchestrator | 00:01:31.918 STDOUT terraform: OpenTofu will perform the following actions:
2025-09-13 00:01:31.918764 | orchestrator | 00:01:31.918 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-09-13 00:01:31.918794 | orchestrator | 00:01:31.918 STDOUT terraform:   # (config refers to values not yet known)
2025-09-13 00:01:31.918828 | orchestrator | 00:01:31.918 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-09-13 00:01:31.918858 | orchestrator | 00:01:31.918 STDOUT terraform:       + checksum = (known after apply)
2025-09-13 00:01:31.918888 | orchestrator | 00:01:31.918 STDOUT terraform:       + created_at = (known after apply)
2025-09-13 00:01:31.918917 | orchestrator | 00:01:31.918 STDOUT terraform:       + file = (known after apply)
2025-09-13 00:01:31.918956 | orchestrator | 00:01:31.918 STDOUT terraform:       + id = (known after apply)
2025-09-13 00:01:31.918983 | orchestrator | 00:01:31.918 STDOUT terraform:       + metadata = (known after apply)
2025-09-13 00:01:31.919003 | orchestrator | 00:01:31.918 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-09-13 00:01:31.919034 | orchestrator | 00:01:31.919 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-09-13 00:01:31.919055 | orchestrator | 00:01:31.919 STDOUT terraform:       + most_recent = true
2025-09-13 00:01:31.919083 | orchestrator | 00:01:31.919 STDOUT terraform:       + name = (known after apply)
2025-09-13 00:01:31.919112 | orchestrator | 00:01:31.919 STDOUT terraform:       + protected = (known after apply)
2025-09-13 00:01:31.919139 | orchestrator | 00:01:31.919 STDOUT terraform:       + region = (known after apply)
2025-09-13 00:01:31.919167 | orchestrator | 00:01:31.919 STDOUT terraform:       + schema = (known after apply)
2025-09-13 00:01:31.919195 | orchestrator | 00:01:31.919 STDOUT terraform:       + size_bytes = (known after apply)
2025-09-13 00:01:31.919223 | orchestrator | 00:01:31.919 STDOUT terraform:       + tags = (known after apply)
2025-09-13 00:01:31.919250 | orchestrator | 00:01:31.919 STDOUT terraform:       + updated_at = (known after apply)
2025-09-13 00:01:31.919257 | orchestrator | 00:01:31.919 STDOUT terraform:     }
2025-09-13 00:01:31.919308 | orchestrator | 00:01:31.919 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-09-13 00:01:31.919334 | orchestrator | 00:01:31.919 STDOUT terraform:   # (config refers to values not yet known)
2025-09-13 00:01:31.919378 | orchestrator | 00:01:31.919 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-09-13 00:01:31.919407 | orchestrator | 00:01:31.919 STDOUT terraform:       + checksum = (known after apply)
2025-09-13 00:01:31.919434 | orchestrator | 00:01:31.919 STDOUT terraform:       + created_at = (known after apply)
2025-09-13 00:01:31.919461 | orchestrator | 00:01:31.919 STDOUT terraform:       + file = (known after apply)
2025-09-13 00:01:31.919491 | orchestrator | 00:01:31.919 STDOUT terraform:       + id = (known after apply)
2025-09-13 00:01:31.919519 | orchestrator | 00:01:31.919 STDOUT terraform:       + metadata = (known after apply)
2025-09-13 00:01:31.919546 | orchestrator | 00:01:31.919 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-09-13 00:01:31.919574 | orchestrator | 00:01:31.919 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-09-13 00:01:31.919599 | orchestrator | 00:01:31.919 STDOUT terraform:       + most_recent = true
2025-09-13 00:01:31.919621 | orchestrator | 00:01:31.919 STDOUT terraform:       + name = (known after apply)
2025-09-13 00:01:31.919649 | orchestrator | 00:01:31.919 STDOUT terraform:       + protected = (known after apply)
2025-09-13 00:01:31.919678 | orchestrator | 00:01:31.919 STDOUT terraform:       + region = (known after apply)
2025-09-13 00:01:31.919719 | orchestrator | 00:01:31.919 STDOUT terraform:       + schema = (known after apply)
2025-09-13 00:01:31.919746 | orchestrator | 00:01:31.919 STDOUT terraform:       + size_bytes = (known after apply)
2025-09-13 00:01:31.919774 | orchestrator | 00:01:31.919 STDOUT terraform:       + tags = (known after apply)
2025-09-13 00:01:31.919803 | orchestrator | 00:01:31.919 STDOUT terraform:       + updated_at = (known after apply)
2025-09-13 00:01:31.919810 | orchestrator | 00:01:31.919 STDOUT terraform:     }
2025-09-13 00:01:31.919858 | orchestrator | 00:01:31.919 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-09-13 00:01:31.919888 | orchestrator | 00:01:31.919 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-09-13 00:01:31.919923 | orchestrator | 00:01:31.919 STDOUT terraform:       + content = (known after apply)
2025-09-13 00:01:31.919960 | orchestrator | 00:01:31.919 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-13 00:01:31.919996 | orchestrator | 00:01:31.919 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-13 00:01:31.920032 | orchestrator | 00:01:31.919 STDOUT terraform:       + content_md5 = (known after apply)
2025-09-13 00:01:31.920068 | orchestrator | 00:01:31.920 STDOUT terraform:       + content_sha1 = (known after apply)
2025-09-13 00:01:31.920105 | orchestrator | 00:01:31.920 STDOUT terraform:       + content_sha256 = (known after apply)
2025-09-13 00:01:31.920141 | orchestrator | 00:01:31.920 STDOUT terraform:       + content_sha512 = (known after apply)
2025-09-13 00:01:31.920166 | orchestrator | 00:01:31.920 STDOUT terraform:       + directory_permission = "0777"
2025-09-13 00:01:31.920190 | orchestrator | 00:01:31.920 STDOUT terraform:       + file_permission = "0644"
2025-09-13 00:01:31.920226 | orchestrator | 00:01:31.920 STDOUT terraform:       + filename = ".MANAGER_ADDRESS.ci"
2025-09-13 00:01:31.920264 | orchestrator | 00:01:31.920 STDOUT terraform:       + id = (known after apply)
2025-09-13 00:01:31.920284 | orchestrator | 00:01:31.920 STDOUT terraform:     }
2025-09-13 00:01:31.920315 | orchestrator | 00:01:31.920 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-09-13 00:01:31.920339 | orchestrator | 00:01:31.920 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-09-13 00:01:31.920382 | orchestrator | 00:01:31.920 STDOUT terraform:       + content = (known after apply)
2025-09-13 00:01:31.920418 | orchestrator | 00:01:31.920 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-13 00:01:31.920452 | orchestrator | 00:01:31.920 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-13 00:01:31.920487 | orchestrator | 00:01:31.920 STDOUT terraform:       + content_md5 = (known after apply)
2025-09-13 00:01:31.920522 | orchestrator | 00:01:31.920 STDOUT terraform:       + content_sha1 = (known after apply)
2025-09-13 00:01:31.920555 | orchestrator | 00:01:31.920 STDOUT terraform:       + content_sha256 = (known after apply)
2025-09-13 00:01:31.920589 | orchestrator | 00:01:31.920 STDOUT terraform:       + content_sha512 = (known after apply)
2025-09-13 00:01:31.920611 | orchestrator | 00:01:31.920 STDOUT terraform:       + directory_permission = "0777"
2025-09-13 00:01:31.920642 | orchestrator | 00:01:31.920 STDOUT terraform:       + file_permission = "0644"
2025-09-13 00:01:31.920674 | orchestrator | 00:01:31.920 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-09-13 00:01:31.920732 | orchestrator | 00:01:31.920 STDOUT terraform:       + id = (known after apply)
2025-09-13 00:01:31.920747 | orchestrator | 00:01:31.920 STDOUT terraform:     }
2025-09-13 00:01:31.920776 | orchestrator | 00:01:31.920 STDOUT terraform:   # local_file.inventory will be created
2025-09-13 00:01:31.920798 | orchestrator | 00:01:31.920 STDOUT terraform:   + resource "local_file" "inventory" {
2025-09-13 00:01:31.920834 | orchestrator | 00:01:31.920 STDOUT terraform:       + content = (known after apply)
2025-09-13 00:01:31.920867 | orchestrator | 00:01:31.920 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-13 00:01:31.920901 | orchestrator | 00:01:31.920 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-13 00:01:31.920936 | orchestrator | 00:01:31.920 STDOUT terraform:       + content_md5 = (known after apply)
2025-09-13 00:01:31.920970 | orchestrator | 00:01:31.920 STDOUT terraform:       + content_sha1 = (known after apply)
2025-09-13 00:01:31.921004 | orchestrator | 00:01:31.920 STDOUT terraform:       + content_sha256 = (known after apply)
2025-09-13 00:01:31.921039 | orchestrator | 00:01:31.921 STDOUT terraform:       + content_sha512 = (known after apply)
2025-09-13 00:01:31.921062 | orchestrator | 00:01:31.921 STDOUT terraform:       + directory_permission = "0777"
2025-09-13 00:01:31.921087 | orchestrator | 00:01:31.921 STDOUT terraform:       + file_permission = "0644"
2025-09-13 00:01:31.921117 | orchestrator | 00:01:31.921 STDOUT terraform:       + filename = "inventory.ci"
2025-09-13 00:01:31.921154 | orchestrator | 00:01:31.921 STDOUT terraform:       + id = (known after apply)
2025-09-13 00:01:31.921161 | orchestrator | 00:01:31.921 STDOUT terraform:     }
2025-09-13 00:01:31.921192 | orchestrator | 00:01:31.921 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-09-13 00:01:31.921221 | orchestrator | 00:01:31.921 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-09-13 00:01:31.921252 | orchestrator | 00:01:31.921 STDOUT terraform:       + content = (sensitive value)
2025-09-13 00:01:31.921286 | orchestrator | 00:01:31.921 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-13 00:01:31.921319 | orchestrator | 00:01:31.921 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-13 00:01:31.921353 | orchestrator | 00:01:31.921 STDOUT terraform:       + content_md5 = (known after apply)
2025-09-13 00:01:31.921406 | orchestrator | 00:01:31.921 STDOUT terraform:       + content_sha1 = (known after apply)
2025-09-13 00:01:31.921463 | orchestrator | 00:01:31.921 STDOUT terraform:       + content_sha256 = (known after apply)
2025-09-13 00:01:31.921499 | orchestrator | 00:01:31.921 STDOUT terraform:       + content_sha512 = (known after apply)
2025-09-13 00:01:31.921523 | orchestrator | 00:01:31.921 STDOUT terraform:       + directory_permission = "0700"
2025-09-13 00:01:31.921547 | orchestrator | 00:01:31.921 STDOUT terraform:       + file_permission = "0600"
2025-09-13 00:01:31.921575 | orchestrator | 00:01:31.921 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-09-13 00:01:31.921622 | orchestrator | 00:01:31.921 STDOUT terraform:       + id = (known after apply)
2025-09-13 00:01:31.921636 | orchestrator | 00:01:31.921 STDOUT terraform:     }
2025-09-13 00:01:31.921665 | orchestrator | 00:01:31.921 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-09-13 00:01:31.921736 | orchestrator | 00:01:31.921 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-09-13 00:01:31.921799 | orchestrator | 00:01:31.921 STDOUT terraform:       + id = (known after apply)
2025-09-13 00:01:31.921810 | orchestrator | 00:01:31.921 STDOUT terraform:     }
2025-09-13 00:01:31.921861 | orchestrator | 00:01:31.921 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-09-13 00:01:31.921908 | orchestrator | 00:01:31.921 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-09-13 00:01:31.921939 | orchestrator | 00:01:31.921 STDOUT terraform:       + attachment = (known after apply)
2025-09-13 00:01:31.921962 | orchestrator | 00:01:31.921 STDOUT terraform:       + availability_zone = "nova"
2025-09-13 00:01:31.921997 | orchestrator | 00:01:31.921 STDOUT terraform:       + id = (known after apply)
2025-09-13 00:01:31.922048 | orchestrator | 00:01:31.921 STDOUT terraform:       + image_id = (known after apply)
2025-09-13 00:01:31.922081 | orchestrator | 00:01:31.922 STDOUT terraform:       + metadata = (known after apply)
2025-09-13 00:01:31.922125 | orchestrator | 00:01:31.922 STDOUT terraform:       + name = "testbed-volume-manager-base"
2025-09-13 00:01:31.922160 | orchestrator | 00:01:31.922 STDOUT terraform:       + region = (known after apply)
2025-09-13 00:01:31.922181 | orchestrator | 00:01:31.922 STDOUT terraform:       + size = 80
2025-09-13 00:01:31.922205 | orchestrator | 00:01:31.922 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-13 00:01:31.922228 | orchestrator | 00:01:31.922 STDOUT terraform:       + volume_type = "ssd"
2025-09-13 00:01:31.922243 | orchestrator | 00:01:31.922 STDOUT terraform:     }
2025-09-13 00:01:31.922290 | orchestrator | 00:01:31.922 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-09-13 00:01:31.922335 | orchestrator | 00:01:31.922 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-13 00:01:31.922371 | orchestrator | 00:01:31.922 STDOUT terraform:       + attachment = (known after apply)
2025-09-13 00:01:31.922396 | orchestrator | 00:01:31.922 STDOUT terraform:       + availability_zone = "nova"
2025-09-13 00:01:31.922433 | orchestrator | 00:01:31.922 STDOUT terraform:       + id = (known after apply)
2025-09-13 00:01:31.922468 | orchestrator | 00:01:31.922 STDOUT terraform:       + image_id = (known after apply)
2025-09-13 00:01:31.922502 | orchestrator | 00:01:31.922 STDOUT terraform:       + metadata = (known after apply)
2025-09-13 00:01:31.922546 | orchestrator | 00:01:31.922 STDOUT terraform:       + name = "testbed-volume-0-node-base"
2025-09-13 00:01:31.922581 | orchestrator | 00:01:31.922 STDOUT terraform:       + region = (known after apply)
2025-09-13 00:01:31.922600 | orchestrator | 00:01:31.922 STDOUT terraform:       + size = 80
2025-09-13 00:01:31.922623 | orchestrator | 00:01:31.922 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-13 00:01:31.922648 | orchestrator | 00:01:31.922 STDOUT terraform:       + volume_type = "ssd"
2025-09-13 00:01:31.922654 | orchestrator | 00:01:31.922 STDOUT terraform:     }
2025-09-13 00:01:31.922721 | orchestrator | 00:01:31.922 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-09-13 00:01:31.922767 | orchestrator | 00:01:31.922 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-13 00:01:31.922800 | orchestrator | 00:01:31.922 STDOUT terraform:       + attachment = (known after apply)
2025-09-13 00:01:31.922823 | orchestrator | 00:01:31.922 STDOUT terraform:       + availability_zone = "nova"
2025-09-13 00:01:31.922857 | orchestrator | 00:01:31.922 STDOUT terraform:       + id = (known after apply)
2025-09-13 00:01:31.922893 | orchestrator | 00:01:31.922 STDOUT terraform:       + image_id = (known after apply)
2025-09-13 00:01:31.922927 | orchestrator | 00:01:31.922 STDOUT terraform:       + metadata = (known after apply)
2025-09-13 00:01:31.922969 | orchestrator | 00:01:31.922 STDOUT terraform:       + name = "testbed-volume-1-node-base"
2025-09-13 00:01:31.923003 | orchestrator | 00:01:31.922 STDOUT terraform:       + region = (known after apply)
2025-09-13 00:01:31.923024 | orchestrator | 00:01:31.923 STDOUT terraform:       + size = 80
2025-09-13 00:01:31.923048 | orchestrator | 00:01:31.923 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-13 00:01:31.923072 | orchestrator | 00:01:31.923 STDOUT terraform:       + volume_type = "ssd"
2025-09-13 00:01:31.923078 | orchestrator | 00:01:31.923 STDOUT terraform:     }
2025-09-13 00:01:31.923128 | orchestrator | 00:01:31.923 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-09-13 00:01:31.923172 | orchestrator | 00:01:31.923 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-13 00:01:31.923208 | orchestrator | 00:01:31.923 STDOUT terraform:       + attachment = (known after apply)
2025-09-13 00:01:31.923239 | orchestrator | 00:01:31.923 STDOUT terraform:       + availability_zone = "nova"
2025-09-13 00:01:31.923268 | orchestrator | 00:01:31.923 STDOUT terraform:       + id = (known after apply)
2025-09-13 00:01:31.923303 | orchestrator | 00:01:31.923 STDOUT terraform:       + image_id = (known after apply)
2025-09-13 00:01:31.923336 | orchestrator | 00:01:31.923 STDOUT terraform:       + metadata = (known after apply)
2025-09-13 00:01:31.923378 | orchestrator | 00:01:31.923 STDOUT terraform:       + name = "testbed-volume-2-node-base"
2025-09-13 00:01:31.923412 | orchestrator | 00:01:31.923 STDOUT terraform:       + region = (known after apply)
2025-09-13 00:01:31.923433 | orchestrator | 00:01:31.923 STDOUT terraform:       + size = 80
2025-09-13 00:01:31.923455 | orchestrator | 00:01:31.923 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-13 00:01:31.923479 | orchestrator | 00:01:31.923 STDOUT terraform:       + volume_type = "ssd"
2025-09-13 00:01:31.923492 | orchestrator | 00:01:31.923 STDOUT terraform:     }
2025-09-13 00:01:31.923537 | orchestrator | 00:01:31.923 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-09-13 00:01:31.923580 | orchestrator | 00:01:31.923 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-13 00:01:31.923614 | orchestrator | 00:01:31.923 STDOUT terraform:       + attachment = (known after apply)
2025-09-13 00:01:31.923637 | orchestrator | 00:01:31.923 STDOUT terraform:       + availability_zone = "nova"
2025-09-13 00:01:31.923673 | orchestrator | 00:01:31.923 STDOUT terraform:       + id = (known after apply)
2025-09-13 00:01:31.923725 | orchestrator | 00:01:31.923 STDOUT terraform:       + image_id = (known after apply)
2025-09-13 00:01:31.923750 | orchestrator | 00:01:31.923 STDOUT terraform:       + metadata = (known after apply)
2025-09-13 00:01:31.923793 | orchestrator | 00:01:31.923 STDOUT terraform:       + name = "testbed-volume-3-node-base"
2025-09-13 00:01:31.923827 | orchestrator | 00:01:31.923 STDOUT terraform:       + region = (known after apply)
2025-09-13 00:01:31.923847 | orchestrator | 00:01:31.923 STDOUT terraform:       + size = 80
2025-09-13 00:01:31.923870 | orchestrator | 00:01:31.923 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-13 00:01:31.923893 | orchestrator | 00:01:31.923 STDOUT terraform:       + volume_type = "ssd"
2025-09-13 00:01:31.923899 | orchestrator | 00:01:31.923 STDOUT terraform:     }
2025-09-13 00:01:31.923946 | orchestrator | 00:01:31.923 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-09-13 00:01:31.923993 | orchestrator | 00:01:31.923 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-13 00:01:31.924023 | orchestrator | 00:01:31.923 STDOUT terraform:       + attachment = (known after apply)
2025-09-13 00:01:31.924047 | orchestrator | 00:01:31.924 STDOUT terraform:       + availability_zone = "nova"
2025-09-13 00:01:31.924082 | orchestrator | 00:01:31.924 STDOUT terraform:       + id = (known after apply)
2025-09-13 00:01:31.924116 | orchestrator | 00:01:31.924 STDOUT terraform:       + image_id = (known after apply)
2025-09-13 00:01:31.924151 | orchestrator | 00:01:31.924 STDOUT terraform:       + metadata = (known after apply)
2025-09-13 00:01:31.924193 | orchestrator | 00:01:31.924 STDOUT terraform:       + name = "testbed-volume-4-node-base"
2025-09-13 00:01:31.924227 | orchestrator | 00:01:31.924 STDOUT terraform:       + region = (known after apply)
2025-09-13 00:01:31.924248 | orchestrator | 00:01:31.924 STDOUT terraform:       + size = 80
2025-09-13 00:01:31.924271 | orchestrator | 00:01:31.924 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-13 00:01:31.924295 | orchestrator | 00:01:31.924 STDOUT terraform:       + volume_type = "ssd"
2025-09-13 00:01:31.924311 | orchestrator | 00:01:31.924 STDOUT terraform:     }
2025-09-13 00:01:31.924420 | orchestrator | 00:01:31.924 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-09-13 00:01:31.924458 | orchestrator | 00:01:31.924 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-13 00:01:31.924492 | orchestrator | 00:01:31.924 STDOUT terraform:       + attachment = (known after apply)
2025-09-13 00:01:31.924512 | orchestrator | 00:01:31.924 STDOUT terraform:       + availability_zone = "nova"
2025-09-13 00:01:31.924547 | orchestrator | 00:01:31.924 STDOUT terraform:       + id = (known after apply)
2025-09-13 00:01:31.924580 | orchestrator | 00:01:31.924 STDOUT terraform:       + image_id = (known after apply)
2025-09-13 00:01:31.924616 | orchestrator | 00:01:31.924 STDOUT terraform:       + metadata = (known after apply)
2025-09-13 00:01:31.924656 | orchestrator | 00:01:31.924 STDOUT terraform:       + name = "testbed-volume-5-node-base"
2025-09-13 00:01:31.924689 | orchestrator | 00:01:31.924 STDOUT terraform:       + region = (known after apply)
2025-09-13 00:01:31.924718 | orchestrator | 00:01:31.924 STDOUT terraform:       + size = 80
2025-09-13 00:01:31.924741 | orchestrator | 00:01:31.924 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-13 00:01:31.924768 | orchestrator | 00:01:31.924 STDOUT terraform:       + volume_type = "ssd"
2025-09-13 00:01:31.924775 | orchestrator | 00:01:31.924 STDOUT terraform:     }
2025-09-13 00:01:31.924818 | orchestrator | 00:01:31.924 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-09-13 00:01:31.924859 | orchestrator | 00:01:31.924 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-09-13 00:01:31.924897 | orchestrator | 00:01:31.924 STDOUT
terraform:  + attachment = (known after apply) 2025-09-13 00:01:31.924920 | orchestrator | 00:01:31.924 STDOUT terraform:  + availability_zone = "nova" 2025-09-13 00:01:31.924954 | orchestrator | 00:01:31.924 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.924991 | orchestrator | 00:01:31.924 STDOUT terraform:  + metadata = (known after apply) 2025-09-13 00:01:31.925029 | orchestrator | 00:01:31.924 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-13 00:01:31.925063 | orchestrator | 00:01:31.925 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.925083 | orchestrator | 00:01:31.925 STDOUT terraform:  + size = 20 2025-09-13 00:01:31.925106 | orchestrator | 00:01:31.925 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-13 00:01:31.925129 | orchestrator | 00:01:31.925 STDOUT terraform:  + volume_type = "ssd" 2025-09-13 00:01:31.925136 | orchestrator | 00:01:31.925 STDOUT terraform:  } 2025-09-13 00:01:31.925180 | orchestrator | 00:01:31.925 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-13 00:01:31.925222 | orchestrator | 00:01:31.925 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-13 00:01:31.925258 | orchestrator | 00:01:31.925 STDOUT terraform:  + attachment = (known after apply) 2025-09-13 00:01:31.925282 | orchestrator | 00:01:31.925 STDOUT terraform:  + availability_zone = "nova" 2025-09-13 00:01:31.925318 | orchestrator | 00:01:31.925 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.925352 | orchestrator | 00:01:31.925 STDOUT terraform:  + metadata = (known after apply) 2025-09-13 00:01:31.925393 | orchestrator | 00:01:31.925 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-13 00:01:31.925427 | orchestrator | 00:01:31.925 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.925447 | orchestrator | 00:01:31.925 STDOUT terraform:  + size = 20 2025-09-13 00:01:31.925470 | 
orchestrator | 00:01:31.925 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-13 00:01:31.925493 | orchestrator | 00:01:31.925 STDOUT terraform:  + volume_type = "ssd" 2025-09-13 00:01:31.925506 | orchestrator | 00:01:31.925 STDOUT terraform:  } 2025-09-13 00:01:31.925551 | orchestrator | 00:01:31.925 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-13 00:01:31.925592 | orchestrator | 00:01:31.925 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-13 00:01:31.925628 | orchestrator | 00:01:31.925 STDOUT terraform:  + attachment = (known after apply) 2025-09-13 00:01:31.925650 | orchestrator | 00:01:31.925 STDOUT terraform:  + availability_zone = "nova" 2025-09-13 00:01:31.925685 | orchestrator | 00:01:31.925 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.925737 | orchestrator | 00:01:31.925 STDOUT terraform:  + metadata = (known after apply) 2025-09-13 00:01:31.925774 | orchestrator | 00:01:31.925 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-13 00:01:31.925810 | orchestrator | 00:01:31.925 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.925831 | orchestrator | 00:01:31.925 STDOUT terraform:  + size = 20 2025-09-13 00:01:31.925862 | orchestrator | 00:01:31.925 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-13 00:01:31.925881 | orchestrator | 00:01:31.925 STDOUT terraform:  + volume_type = "ssd" 2025-09-13 00:01:31.925894 | orchestrator | 00:01:31.925 STDOUT terraform:  } 2025-09-13 00:01:31.925938 | orchestrator | 00:01:31.925 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-13 00:01:31.925979 | orchestrator | 00:01:31.925 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-13 00:01:31.926032 | orchestrator | 00:01:31.925 STDOUT terraform:  + attachment = (known after apply) 2025-09-13 00:01:31.926117 | orchestrator | 
00:01:31.926 STDOUT terraform:  + availability_zone = "nova" 2025-09-13 00:01:31.926745 | orchestrator | 00:01:31.926 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.927297 | orchestrator | 00:01:31.926 STDOUT terraform:  + metadata = (known after apply) 2025-09-13 00:01:31.927730 | orchestrator | 00:01:31.927 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-13 00:01:31.928070 | orchestrator | 00:01:31.927 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.928352 | orchestrator | 00:01:31.928 STDOUT terraform:  + size = 20 2025-09-13 00:01:31.929008 | orchestrator | 00:01:31.928 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-13 00:01:31.929441 | orchestrator | 00:01:31.929 STDOUT terraform:  + volume_type = "ssd" 2025-09-13 00:01:31.929758 | orchestrator | 00:01:31.929 STDOUT terraform:  } 2025-09-13 00:01:31.929841 | orchestrator | 00:01:31.929 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-13 00:01:31.929884 | orchestrator | 00:01:31.929 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-13 00:01:31.929919 | orchestrator | 00:01:31.929 STDOUT terraform:  + attachment = (known after apply) 2025-09-13 00:01:31.929941 | orchestrator | 00:01:31.929 STDOUT terraform:  + availability_zone = "nova" 2025-09-13 00:01:31.929976 | orchestrator | 00:01:31.929 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.930011 | orchestrator | 00:01:31.929 STDOUT terraform:  + metadata = (known after apply) 2025-09-13 00:01:31.930063 | orchestrator | 00:01:31.930 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-13 00:01:31.930096 | orchestrator | 00:01:31.930 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.930112 | orchestrator | 00:01:31.930 STDOUT terraform:  + size = 20 2025-09-13 00:01:31.930135 | orchestrator | 00:01:31.930 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-13 
00:01:31.930159 | orchestrator | 00:01:31.930 STDOUT terraform:  + volume_type = "ssd" 2025-09-13 00:01:31.930173 | orchestrator | 00:01:31.930 STDOUT terraform:  } 2025-09-13 00:01:31.930220 | orchestrator | 00:01:31.930 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-13 00:01:31.930266 | orchestrator | 00:01:31.930 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-13 00:01:31.930301 | orchestrator | 00:01:31.930 STDOUT terraform:  + attachment = (known after apply) 2025-09-13 00:01:31.930323 | orchestrator | 00:01:31.930 STDOUT terraform:  + availability_zone = "nova" 2025-09-13 00:01:31.930359 | orchestrator | 00:01:31.930 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.930393 | orchestrator | 00:01:31.930 STDOUT terraform:  + metadata = (known after apply) 2025-09-13 00:01:31.930429 | orchestrator | 00:01:31.930 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-13 00:01:31.930463 | orchestrator | 00:01:31.930 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.930490 | orchestrator | 00:01:31.930 STDOUT terraform:  + size = 20 2025-09-13 00:01:31.930510 | orchestrator | 00:01:31.930 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-13 00:01:31.930534 | orchestrator | 00:01:31.930 STDOUT terraform:  + volume_type = "ssd" 2025-09-13 00:01:31.930548 | orchestrator | 00:01:31.930 STDOUT terraform:  } 2025-09-13 00:01:31.930591 | orchestrator | 00:01:31.930 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-13 00:01:31.930635 | orchestrator | 00:01:31.930 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-13 00:01:31.930669 | orchestrator | 00:01:31.930 STDOUT terraform:  + attachment = (known after apply) 2025-09-13 00:01:31.930756 | orchestrator | 00:01:31.930 STDOUT terraform:  + availability_zone = "nova" 2025-09-13 00:01:31.930764 | 
orchestrator | 00:01:31.930 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.930769 | orchestrator | 00:01:31.930 STDOUT terraform:  + metadata = (known after apply) 2025-09-13 00:01:31.930801 | orchestrator | 00:01:31.930 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-13 00:01:31.930835 | orchestrator | 00:01:31.930 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.930855 | orchestrator | 00:01:31.930 STDOUT terraform:  + size = 20 2025-09-13 00:01:31.930878 | orchestrator | 00:01:31.930 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-13 00:01:31.930902 | orchestrator | 00:01:31.930 STDOUT terraform:  + volume_type = "ssd" 2025-09-13 00:01:31.930909 | orchestrator | 00:01:31.930 STDOUT terraform:  } 2025-09-13 00:01:31.930955 | orchestrator | 00:01:31.930 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-13 00:01:31.930996 | orchestrator | 00:01:31.930 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-13 00:01:31.931033 | orchestrator | 00:01:31.930 STDOUT terraform:  + attachment = (known after apply) 2025-09-13 00:01:31.931058 | orchestrator | 00:01:31.931 STDOUT terraform:  + availability_zone = "nova" 2025-09-13 00:01:31.931094 | orchestrator | 00:01:31.931 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.931129 | orchestrator | 00:01:31.931 STDOUT terraform:  + metadata = (known after apply) 2025-09-13 00:01:31.931168 | orchestrator | 00:01:31.931 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-13 00:01:31.931202 | orchestrator | 00:01:31.931 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.931222 | orchestrator | 00:01:31.931 STDOUT terraform:  + size = 20 2025-09-13 00:01:31.931248 | orchestrator | 00:01:31.931 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-13 00:01:31.931270 | orchestrator | 00:01:31.931 STDOUT terraform:  + volume_type = "ssd" 
2025-09-13 00:01:31.931276 | orchestrator | 00:01:31.931 STDOUT terraform:  } 2025-09-13 00:01:31.931322 | orchestrator | 00:01:31.931 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-13 00:01:31.931363 | orchestrator | 00:01:31.931 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-13 00:01:31.931397 | orchestrator | 00:01:31.931 STDOUT terraform:  + attachment = (known after apply) 2025-09-13 00:01:31.931420 | orchestrator | 00:01:31.931 STDOUT terraform:  + availability_zone = "nova" 2025-09-13 00:01:31.931455 | orchestrator | 00:01:31.931 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.931506 | orchestrator | 00:01:31.931 STDOUT terraform:  + metadata = (known after apply) 2025-09-13 00:01:31.931552 | orchestrator | 00:01:31.931 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-13 00:01:31.931587 | orchestrator | 00:01:31.931 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.931608 | orchestrator | 00:01:31.931 STDOUT terraform:  + size = 20 2025-09-13 00:01:31.931631 | orchestrator | 00:01:31.931 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-13 00:01:31.931654 | orchestrator | 00:01:31.931 STDOUT terraform:  + volume_type = "ssd" 2025-09-13 00:01:31.931661 | orchestrator | 00:01:31.931 STDOUT terraform:  } 2025-09-13 00:01:31.931734 | orchestrator | 00:01:31.931 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-13 00:01:31.931772 | orchestrator | 00:01:31.931 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-13 00:01:31.931804 | orchestrator | 00:01:31.931 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-13 00:01:31.931839 | orchestrator | 00:01:31.931 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-13 00:01:31.931874 | orchestrator | 00:01:31.931 STDOUT terraform:  + all_metadata = (known after apply) 
2025-09-13 00:01:31.931908 | orchestrator | 00:01:31.931 STDOUT terraform:  + all_tags = (known after apply) 2025-09-13 00:01:31.931931 | orchestrator | 00:01:31.931 STDOUT terraform:  + availability_zone = "nova" 2025-09-13 00:01:31.931952 | orchestrator | 00:01:31.931 STDOUT terraform:  + config_drive = true 2025-09-13 00:01:31.931986 | orchestrator | 00:01:31.931 STDOUT terraform:  + created = (known after apply) 2025-09-13 00:01:31.932022 | orchestrator | 00:01:31.931 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-13 00:01:31.932051 | orchestrator | 00:01:31.932 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-13 00:01:31.932074 | orchestrator | 00:01:31.932 STDOUT terraform:  + force_delete = false 2025-09-13 00:01:31.932106 | orchestrator | 00:01:31.932 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-13 00:01:31.932140 | orchestrator | 00:01:31.932 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.932175 | orchestrator | 00:01:31.932 STDOUT terraform:  + image_id = (known after apply) 2025-09-13 00:01:31.932208 | orchestrator | 00:01:31.932 STDOUT terraform:  + image_name = (known after apply) 2025-09-13 00:01:31.932233 | orchestrator | 00:01:31.932 STDOUT terraform:  + key_pair = "testbed" 2025-09-13 00:01:31.932263 | orchestrator | 00:01:31.932 STDOUT terraform:  + name = "testbed-manager" 2025-09-13 00:01:31.932287 | orchestrator | 00:01:31.932 STDOUT terraform:  + power_state = "active" 2025-09-13 00:01:31.932321 | orchestrator | 00:01:31.932 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.932375 | orchestrator | 00:01:31.932 STDOUT terraform:  + security_groups = (known after apply) 2025-09-13 00:01:31.932402 | orchestrator | 00:01:31.932 STDOUT terraform:  + stop_before_destroy = false 2025-09-13 00:01:31.932437 | orchestrator | 00:01:31.932 STDOUT terraform:  + updated = (known after apply) 2025-09-13 00:01:31.932466 | orchestrator | 00:01:31.932 STDOUT terraform:  + 
user_data = (sensitive value) 2025-09-13 00:01:31.932483 | orchestrator | 00:01:31.932 STDOUT terraform:  + block_device { 2025-09-13 00:01:31.932508 | orchestrator | 00:01:31.932 STDOUT terraform:  + boot_index = 0 2025-09-13 00:01:31.932534 | orchestrator | 00:01:31.932 STDOUT terraform:  + delete_on_termination = false 2025-09-13 00:01:31.932561 | orchestrator | 00:01:31.932 STDOUT terraform:  + destination_type = "volume" 2025-09-13 00:01:31.932589 | orchestrator | 00:01:31.932 STDOUT terraform:  + multiattach = false 2025-09-13 00:01:31.932618 | orchestrator | 00:01:31.932 STDOUT terraform:  + source_type = "volume" 2025-09-13 00:01:31.932654 | orchestrator | 00:01:31.932 STDOUT terraform:  + uuid = (known after apply) 2025-09-13 00:01:31.932668 | orchestrator | 00:01:31.932 STDOUT terraform:  } 2025-09-13 00:01:31.932683 | orchestrator | 00:01:31.932 STDOUT terraform:  + network { 2025-09-13 00:01:31.932728 | orchestrator | 00:01:31.932 STDOUT terraform:  + access_network = false 2025-09-13 00:01:31.932735 | orchestrator | 00:01:31.932 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-13 00:01:31.932768 | orchestrator | 00:01:31.932 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-13 00:01:31.932799 | orchestrator | 00:01:31.932 STDOUT terraform:  + mac = (known after apply) 2025-09-13 00:01:31.932831 | orchestrator | 00:01:31.932 STDOUT terraform:  + name = (known after apply) 2025-09-13 00:01:31.932863 | orchestrator | 00:01:31.932 STDOUT terraform:  + port = (known after apply) 2025-09-13 00:01:31.932892 | orchestrator | 00:01:31.932 STDOUT terraform:  + uuid = (known after apply) 2025-09-13 00:01:31.932905 | orchestrator | 00:01:31.932 STDOUT terraform:  } 2025-09-13 00:01:31.932919 | orchestrator | 00:01:31.932 STDOUT terraform:  } 2025-09-13 00:01:31.932961 | orchestrator | 00:01:31.932 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-13 00:01:31.933001 | orchestrator | 00:01:31.932 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-13 00:01:31.933034 | orchestrator | 00:01:31.932 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-13 00:01:31.933071 | orchestrator | 00:01:31.933 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-13 00:01:31.933100 | orchestrator | 00:01:31.933 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-13 00:01:31.933133 | orchestrator | 00:01:31.933 STDOUT terraform:  + all_tags = (known after apply) 2025-09-13 00:01:31.933155 | orchestrator | 00:01:31.933 STDOUT terraform:  + availability_zone = "nova" 2025-09-13 00:01:31.933175 | orchestrator | 00:01:31.933 STDOUT terraform:  + config_drive = true 2025-09-13 00:01:31.933211 | orchestrator | 00:01:31.933 STDOUT terraform:  + created = (known after apply) 2025-09-13 00:01:31.933247 | orchestrator | 00:01:31.933 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-13 00:01:31.933274 | orchestrator | 00:01:31.933 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-13 00:01:31.933299 | orchestrator | 00:01:31.933 STDOUT terraform:  + force_delete = false 2025-09-13 00:01:31.933331 | orchestrator | 00:01:31.933 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-13 00:01:31.933433 | orchestrator | 00:01:31.933 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.933506 | orchestrator | 00:01:31.933 STDOUT terraform:  + image_id = (known after apply) 2025-09-13 00:01:31.933546 | orchestrator | 00:01:31.933 STDOUT terraform:  + image_name = (known after apply) 2025-09-13 00:01:31.933572 | orchestrator | 00:01:31.933 STDOUT terraform:  + key_pair = "testbed" 2025-09-13 00:01:31.933603 | orchestrator | 00:01:31.933 STDOUT terraform:  + name = "testbed-node-0" 2025-09-13 00:01:31.933627 | orchestrator | 00:01:31.933 STDOUT terraform:  + power_state = "active" 2025-09-13 00:01:31.933662 | orchestrator | 00:01:31.933 STDOUT terraform:  + region = (known after 
apply) 2025-09-13 00:01:31.933716 | orchestrator | 00:01:31.933 STDOUT terraform:  + security_groups = (known after apply) 2025-09-13 00:01:31.933733 | orchestrator | 00:01:31.933 STDOUT terraform:  + stop_before_destroy = false 2025-09-13 00:01:31.933768 | orchestrator | 00:01:31.933 STDOUT terraform:  + updated = (known after apply) 2025-09-13 00:01:31.933820 | orchestrator | 00:01:31.933 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-13 00:01:31.933838 | orchestrator | 00:01:31.933 STDOUT terraform:  + block_device { 2025-09-13 00:01:31.933864 | orchestrator | 00:01:31.933 STDOUT terraform:  + boot_index = 0 2025-09-13 00:01:31.933892 | orchestrator | 00:01:31.933 STDOUT terraform:  + delete_on_termination = false 2025-09-13 00:01:31.933920 | orchestrator | 00:01:31.933 STDOUT terraform:  + destination_type = "volume" 2025-09-13 00:01:31.933951 | orchestrator | 00:01:31.933 STDOUT terraform:  + multiattach = false 2025-09-13 00:01:31.933979 | orchestrator | 00:01:31.933 STDOUT terraform:  + source_type = "volume" 2025-09-13 00:01:31.934030 | orchestrator | 00:01:31.933 STDOUT terraform:  + uuid = (known after apply) 2025-09-13 00:01:31.934038 | orchestrator | 00:01:31.934 STDOUT terraform:  } 2025-09-13 00:01:31.934057 | orchestrator | 00:01:31.934 STDOUT terraform:  + network { 2025-09-13 00:01:31.934077 | orchestrator | 00:01:31.934 STDOUT terraform:  + access_network = false 2025-09-13 00:01:31.934108 | orchestrator | 00:01:31.934 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-13 00:01:31.934142 | orchestrator | 00:01:31.934 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-13 00:01:31.934173 | orchestrator | 00:01:31.934 STDOUT terraform:  + mac = (known after apply) 2025-09-13 00:01:31.934205 | orchestrator | 00:01:31.934 STDOUT terraform:  + name = (known after apply) 2025-09-13 00:01:31.934235 | orchestrator | 00:01:31.934 STDOUT terraform:  + port = (known after apply) 2025-09-13 
00:01:31.934266 | orchestrator | 00:01:31.934 STDOUT terraform:  + uuid = (known after apply) 2025-09-13 00:01:31.934285 | orchestrator | 00:01:31.934 STDOUT terraform:  } 2025-09-13 00:01:31.934299 | orchestrator | 00:01:31.934 STDOUT terraform:  } 2025-09-13 00:01:31.934343 | orchestrator | 00:01:31.934 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-13 00:01:31.934409 | orchestrator | 00:01:31.934 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-13 00:01:31.934442 | orchestrator | 00:01:31.934 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-13 00:01:31.934480 | orchestrator | 00:01:31.934 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-13 00:01:31.934515 | orchestrator | 00:01:31.934 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-13 00:01:31.934549 | orchestrator | 00:01:31.934 STDOUT terraform:  + all_tags = (known after apply) 2025-09-13 00:01:31.934573 | orchestrator | 00:01:31.934 STDOUT terraform:  + availability_zone = "nova" 2025-09-13 00:01:31.934596 | orchestrator | 00:01:31.934 STDOUT terraform:  + config_drive = true 2025-09-13 00:01:31.934630 | orchestrator | 00:01:31.934 STDOUT terraform:  + created = (known after apply) 2025-09-13 00:01:31.934665 | orchestrator | 00:01:31.934 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-13 00:01:31.934720 | orchestrator | 00:01:31.934 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-13 00:01:31.934749 | orchestrator | 00:01:31.934 STDOUT terraform:  + force_delete = false 2025-09-13 00:01:31.934789 | orchestrator | 00:01:31.934 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-13 00:01:31.934816 | orchestrator | 00:01:31.934 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.934853 | orchestrator | 00:01:31.934 STDOUT terraform:  + image_id = (known after apply) 2025-09-13 00:01:31.934885 | orchestrator | 00:01:31.934 STDOUT 
terraform:  + image_name = (known after apply) 2025-09-13 00:01:31.934909 | orchestrator | 00:01:31.934 STDOUT terraform:  + key_pair = "testbed" 2025-09-13 00:01:31.934939 | orchestrator | 00:01:31.934 STDOUT terraform:  + name = "testbed-node-1" 2025-09-13 00:01:31.934962 | orchestrator | 00:01:31.934 STDOUT terraform:  + power_state = "active" 2025-09-13 00:01:31.934996 | orchestrator | 00:01:31.934 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.935032 | orchestrator | 00:01:31.934 STDOUT terraform:  + security_groups = (known after apply) 2025-09-13 00:01:31.935055 | orchestrator | 00:01:31.935 STDOUT terraform:  + stop_before_destroy = false 2025-09-13 00:01:31.935091 | orchestrator | 00:01:31.935 STDOUT terraform:  + updated = (known after apply) 2025-09-13 00:01:31.935140 | orchestrator | 00:01:31.935 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-13 00:01:31.935156 | orchestrator | 00:01:31.935 STDOUT terraform:  + block_device { 2025-09-13 00:01:31.935186 | orchestrator | 00:01:31.935 STDOUT terraform:  + boot_index = 0 2025-09-13 00:01:31.935213 | orchestrator | 00:01:31.935 STDOUT terraform:  + delete_on_termination = false 2025-09-13 00:01:31.935237 | orchestrator | 00:01:31.935 STDOUT terraform:  + destination_type = "volume" 2025-09-13 00:01:31.935265 | orchestrator | 00:01:31.935 STDOUT terraform:  + multiattach = false 2025-09-13 00:01:31.935294 | orchestrator | 00:01:31.935 STDOUT terraform:  + source_type = "volume" 2025-09-13 00:01:31.935355 | orchestrator | 00:01:31.935 STDOUT terraform:  + uuid = (known after apply) 2025-09-13 00:01:31.935371 | orchestrator | 00:01:31.935 STDOUT terraform:  } 2025-09-13 00:01:31.941104 | orchestrator | 00:01:31.935 STDOUT terraform:  + network { 2025-09-13 00:01:31.941128 | orchestrator | 00:01:31.938 STDOUT terraform:  + access_network = false 2025-09-13 00:01:31.941133 | orchestrator | 00:01:31.938 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-09-13 00:01:31.941137 | orchestrator | 00:01:31.938 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-13 00:01:31.941141 | orchestrator | 00:01:31.938 STDOUT terraform:  + mac = (known after apply) 2025-09-13 00:01:31.941145 | orchestrator | 00:01:31.938 STDOUT terraform:  + name = (known after apply) 2025-09-13 00:01:31.941149 | orchestrator | 00:01:31.938 STDOUT terraform:  + port = (known after apply) 2025-09-13 00:01:31.941153 | orchestrator | 00:01:31.938 STDOUT terraform:  + uuid = (known after apply) 2025-09-13 00:01:31.941157 | orchestrator | 00:01:31.938 STDOUT terraform:  } 2025-09-13 00:01:31.941161 | orchestrator | 00:01:31.938 STDOUT terraform:  } 2025-09-13 00:01:31.941165 | orchestrator | 00:01:31.938 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-13 00:01:31.941169 | orchestrator | 00:01:31.938 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-13 00:01:31.941181 | orchestrator | 00:01:31.938 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-13 00:01:31.941185 | orchestrator | 00:01:31.938 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-13 00:01:31.941189 | orchestrator | 00:01:31.938 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-13 00:01:31.941193 | orchestrator | 00:01:31.938 STDOUT terraform:  + all_tags = (known after apply) 2025-09-13 00:01:31.941197 | orchestrator | 00:01:31.938 STDOUT terraform:  + availability_zone = "nova" 2025-09-13 00:01:31.941200 | orchestrator | 00:01:31.938 STDOUT terraform:  + config_drive = true 2025-09-13 00:01:31.941204 | orchestrator | 00:01:31.938 STDOUT terraform:  + created = (known after apply) 2025-09-13 00:01:31.941208 | orchestrator | 00:01:31.938 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-13 00:01:31.941212 | orchestrator | 00:01:31.938 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-13 00:01:31.941216 | orchestrator | 00:01:31.938 
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-13 00:01:31.956455 | orchestrator | 00:01:31.956 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-13 00:01:31.956497 | orchestrator | 00:01:31.956 STDOUT terraform:  + all_tags = (known after apply) 2025-09-13 00:01:31.956532 | orchestrator | 00:01:31.956 STDOUT terraform:  + device_id = (known after apply) 2025-09-13 00:01:31.956575 | orchestrator | 00:01:31.956 STDOUT terraform:  + device_owner = (known after apply) 2025-09-13 00:01:31.956588 | orchestrator | 00:01:31.956 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-13 00:01:31.956633 | orchestrator | 00:01:31.956 STDOUT terraform:  + dns_name = (known after apply) 2025-09-13 00:01:31.956664 | orchestrator | 00:01:31.956 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.956715 | orchestrator | 00:01:31.956 STDOUT terraform:  + mac_address = (known after apply) 2025-09-13 00:01:31.956728 | orchestrator | 00:01:31.956 STDOUT terraform:  + network_id = (known after apply) 2025-09-13 00:01:31.956774 | orchestrator | 00:01:31.956 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-13 00:01:31.956806 | orchestrator | 00:01:31.956 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-13 00:01:31.956837 | orchestrator | 00:01:31.956 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.956870 | orchestrator | 00:01:31.956 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-13 00:01:31.956901 | orchestrator | 00:01:31.956 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-13 00:01:31.956913 | orchestrator | 00:01:31.956 STDOUT terraform:  + allowed_address_pairs { 2025-09-13 00:01:31.956951 | orchestrator | 00:01:31.956 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-13 00:01:31.956961 | orchestrator | 00:01:31.956 STDOUT terraform:  } 2025-09-13 00:01:31.956972 | orchestrator | 00:01:31.956 STDOUT terraform:  
+ allowed_address_pairs { 2025-09-13 00:01:31.957002 | orchestrator | 00:01:31.956 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-13 00:01:31.957011 | orchestrator | 00:01:31.956 STDOUT terraform:  } 2025-09-13 00:01:31.957022 | orchestrator | 00:01:31.956 STDOUT terraform:  + allowed_address_pairs { 2025-09-13 00:01:31.957052 | orchestrator | 00:01:31.957 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-13 00:01:31.957067 | orchestrator | 00:01:31.957 STDOUT terraform:  } 2025-09-13 00:01:31.957078 | orchestrator | 00:01:31.957 STDOUT terraform:  + allowed_address_pairs { 2025-09-13 00:01:31.957088 | orchestrator | 00:01:31.957 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-13 00:01:31.957099 | orchestrator | 00:01:31.957 STDOUT terraform:  } 2025-09-13 00:01:31.957131 | orchestrator | 00:01:31.957 STDOUT terraform:  + binding (known after apply) 2025-09-13 00:01:31.957140 | orchestrator | 00:01:31.957 STDOUT terraform:  + fixed_ip { 2025-09-13 00:01:31.957150 | orchestrator | 00:01:31.957 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-13 00:01:31.957190 | orchestrator | 00:01:31.957 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-13 00:01:31.957200 | orchestrator | 00:01:31.957 STDOUT terraform:  } 2025-09-13 00:01:31.957211 | orchestrator | 00:01:31.957 STDOUT terraform:  } 2025-09-13 00:01:31.957257 | orchestrator | 00:01:31.957 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-13 00:01:31.957301 | orchestrator | 00:01:31.957 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-13 00:01:31.957332 | orchestrator | 00:01:31.957 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-13 00:01:31.957362 | orchestrator | 00:01:31.957 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-13 00:01:31.957399 | orchestrator | 00:01:31.957 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-09-13 00:01:31.957436 | orchestrator | 00:01:31.957 STDOUT terraform:  + all_tags = (known after apply) 2025-09-13 00:01:31.957477 | orchestrator | 00:01:31.957 STDOUT terraform:  + device_id = (known after apply) 2025-09-13 00:01:31.957514 | orchestrator | 00:01:31.957 STDOUT terraform:  + device_owner = (known after apply) 2025-09-13 00:01:31.957551 | orchestrator | 00:01:31.957 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-13 00:01:31.957582 | orchestrator | 00:01:31.957 STDOUT terraform:  + dns_name = (known after apply) 2025-09-13 00:01:31.957614 | orchestrator | 00:01:31.957 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.957645 | orchestrator | 00:01:31.957 STDOUT terraform:  + mac_address = (known after apply) 2025-09-13 00:01:31.957674 | orchestrator | 00:01:31.957 STDOUT terraform:  + network_id = (known after apply) 2025-09-13 00:01:31.957720 | orchestrator | 00:01:31.957 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-13 00:01:31.957750 | orchestrator | 00:01:31.957 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-13 00:01:31.957779 | orchestrator | 00:01:31.957 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.957817 | orchestrator | 00:01:31.957 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-13 00:01:31.957847 | orchestrator | 00:01:31.957 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-13 00:01:31.957858 | orchestrator | 00:01:31.957 STDOUT terraform:  + allowed_address_pairs { 2025-09-13 00:01:31.957887 | orchestrator | 00:01:31.957 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-13 00:01:31.957903 | orchestrator | 00:01:31.957 STDOUT terraform:  } 2025-09-13 00:01:31.957913 | orchestrator | 00:01:31.957 STDOUT terraform:  + allowed_address_pairs { 2025-09-13 00:01:31.957949 | orchestrator | 00:01:31.957 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-13 00:01:31.957959 | 
orchestrator | 00:01:31.957 STDOUT terraform:  } 2025-09-13 00:01:31.957970 | orchestrator | 00:01:31.957 STDOUT terraform:  + allowed_address_pairs { 2025-09-13 00:01:31.957998 | orchestrator | 00:01:31.957 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-13 00:01:31.958008 | orchestrator | 00:01:31.957 STDOUT terraform:  } 2025-09-13 00:01:31.958042 | orchestrator | 00:01:31.957 STDOUT terraform:  + allowed_address_pairs { 2025-09-13 00:01:31.958053 | orchestrator | 00:01:31.958 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-13 00:01:31.958064 | orchestrator | 00:01:31.958 STDOUT terraform:  } 2025-09-13 00:01:31.958092 | orchestrator | 00:01:31.958 STDOUT terraform:  + binding (known after apply) 2025-09-13 00:01:31.958105 | orchestrator | 00:01:31.958 STDOUT terraform:  + fixed_ip { 2025-09-13 00:01:31.958117 | orchestrator | 00:01:31.958 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-13 00:01:31.958153 | orchestrator | 00:01:31.958 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-13 00:01:31.958165 | orchestrator | 00:01:31.958 STDOUT terraform:  } 2025-09-13 00:01:31.958175 | orchestrator | 00:01:31.958 STDOUT terraform:  } 2025-09-13 00:01:31.958219 | orchestrator | 00:01:31.958 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-13 00:01:31.958264 | orchestrator | 00:01:31.958 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-13 00:01:31.958301 | orchestrator | 00:01:31.958 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-13 00:01:31.958337 | orchestrator | 00:01:31.958 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-13 00:01:31.958372 | orchestrator | 00:01:31.958 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-13 00:01:31.958408 | orchestrator | 00:01:31.958 STDOUT terraform:  + all_tags = (known after apply) 2025-09-13 00:01:31.958444 | orchestrator | 
00:01:31.958 STDOUT terraform:  + device_id = (known after apply) 2025-09-13 00:01:31.958480 | orchestrator | 00:01:31.958 STDOUT terraform:  + device_owner = (known after apply) 2025-09-13 00:01:31.958516 | orchestrator | 00:01:31.958 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-13 00:01:31.958552 | orchestrator | 00:01:31.958 STDOUT terraform:  + dns_name = (known after apply) 2025-09-13 00:01:31.958590 | orchestrator | 00:01:31.958 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.958627 | orchestrator | 00:01:31.958 STDOUT terraform:  + mac_address = (known after apply) 2025-09-13 00:01:31.958663 | orchestrator | 00:01:31.958 STDOUT terraform:  + network_id = (known after apply) 2025-09-13 00:01:31.958712 | orchestrator | 00:01:31.958 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-13 00:01:31.958758 | orchestrator | 00:01:31.958 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-13 00:01:31.958793 | orchestrator | 00:01:31.958 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.958829 | orchestrator | 00:01:31.958 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-13 00:01:31.958866 | orchestrator | 00:01:31.958 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-13 00:01:31.958878 | orchestrator | 00:01:31.958 STDOUT terraform:  + allowed_address_pairs { 2025-09-13 00:01:31.958911 | orchestrator | 00:01:31.958 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-13 00:01:31.958923 | orchestrator | 00:01:31.958 STDOUT terraform:  } 2025-09-13 00:01:31.958934 | orchestrator | 00:01:31.958 STDOUT terraform:  + allowed_address_pairs { 2025-09-13 00:01:31.958970 | orchestrator | 00:01:31.958 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-13 00:01:31.958982 | orchestrator | 00:01:31.958 STDOUT terraform:  } 2025-09-13 00:01:31.958993 | orchestrator | 00:01:31.958 STDOUT terraform:  + allowed_address_pairs { 2025-09-13 
00:01:31.959021 | orchestrator | 00:01:31.958 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-13 00:01:31.959033 | orchestrator | 00:01:31.959 STDOUT terraform:  } 2025-09-13 00:01:31.959044 | orchestrator | 00:01:31.959 STDOUT terraform:  + allowed_address_pairs { 2025-09-13 00:01:31.959074 | orchestrator | 00:01:31.959 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-13 00:01:31.959086 | orchestrator | 00:01:31.959 STDOUT terraform:  } 2025-09-13 00:01:31.959097 | orchestrator | 00:01:31.959 STDOUT terraform:  + binding (known after apply) 2025-09-13 00:01:31.959108 | orchestrator | 00:01:31.959 STDOUT terraform:  + fixed_ip { 2025-09-13 00:01:31.959141 | orchestrator | 00:01:31.959 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-13 00:01:31.959172 | orchestrator | 00:01:31.959 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-13 00:01:31.959184 | orchestrator | 00:01:31.959 STDOUT terraform:  } 2025-09-13 00:01:31.959192 | orchestrator | 00:01:31.959 STDOUT terraform:  } 2025-09-13 00:01:31.959235 | orchestrator | 00:01:31.959 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-13 00:01:31.959280 | orchestrator | 00:01:31.959 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-13 00:01:31.959315 | orchestrator | 00:01:31.959 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-13 00:01:31.959351 | orchestrator | 00:01:31.959 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-13 00:01:31.959384 | orchestrator | 00:01:31.959 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-13 00:01:31.959420 | orchestrator | 00:01:31.959 STDOUT terraform:  + all_tags = (known after apply) 2025-09-13 00:01:31.959458 | orchestrator | 00:01:31.959 STDOUT terraform:  + device_id = (known after apply) 2025-09-13 00:01:31.959493 | orchestrator | 00:01:31.959 STDOUT terraform:  + device_owner = (known after 
apply) 2025-09-13 00:01:31.959528 | orchestrator | 00:01:31.959 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-13 00:01:31.959565 | orchestrator | 00:01:31.959 STDOUT terraform:  + dns_name = (known after apply) 2025-09-13 00:01:31.959601 | orchestrator | 00:01:31.959 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.959636 | orchestrator | 00:01:31.959 STDOUT terraform:  + mac_address = (known after apply) 2025-09-13 00:01:31.959671 | orchestrator | 00:01:31.959 STDOUT terraform:  + network_id = (known after apply) 2025-09-13 00:01:31.959716 | orchestrator | 00:01:31.959 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-13 00:01:31.959752 | orchestrator | 00:01:31.959 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-13 00:01:31.959788 | orchestrator | 00:01:31.959 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.959824 | orchestrator | 00:01:31.959 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-13 00:01:31.959859 | orchestrator | 00:01:31.959 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-13 00:01:31.959871 | orchestrator | 00:01:31.959 STDOUT terraform:  + allowed_address_pairs { 2025-09-13 00:01:31.959903 | orchestrator | 00:01:31.959 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-13 00:01:31.959915 | orchestrator | 00:01:31.959 STDOUT terraform:  } 2025-09-13 00:01:31.959925 | orchestrator | 00:01:31.959 STDOUT terraform:  + allowed_address_pairs { 2025-09-13 00:01:31.959956 | orchestrator | 00:01:31.959 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-13 00:01:31.959968 | orchestrator | 00:01:31.959 STDOUT terraform:  } 2025-09-13 00:01:31.959979 | orchestrator | 00:01:31.959 STDOUT terraform:  + allowed_address_pairs { 2025-09-13 00:01:31.960010 | orchestrator | 00:01:31.959 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-13 00:01:31.960022 | orchestrator | 00:01:31.960 STDOUT terraform:  } 
2025-09-13 00:01:31.960033 | orchestrator | 00:01:31.960 STDOUT terraform:  + allowed_address_pairs { 2025-09-13 00:01:31.960063 | orchestrator | 00:01:31.960 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-13 00:01:31.960075 | orchestrator | 00:01:31.960 STDOUT terraform:  } 2025-09-13 00:01:31.960086 | orchestrator | 00:01:31.960 STDOUT terraform:  + binding (known after apply) 2025-09-13 00:01:31.960096 | orchestrator | 00:01:31.960 STDOUT terraform:  + fixed_ip { 2025-09-13 00:01:31.960125 | orchestrator | 00:01:31.960 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-09-13 00:01:31.960153 | orchestrator | 00:01:31.960 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-13 00:01:31.960166 | orchestrator | 00:01:31.960 STDOUT terraform:  } 2025-09-13 00:01:31.960174 | orchestrator | 00:01:31.960 STDOUT terraform:  } 2025-09-13 00:01:31.960218 | orchestrator | 00:01:31.960 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-13 00:01:31.960267 | orchestrator | 00:01:31.960 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-13 00:01:31.960299 | orchestrator | 00:01:31.960 STDOUT terraform:  + force_destroy = false 2025-09-13 00:01:31.960333 | orchestrator | 00:01:31.960 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.960366 | orchestrator | 00:01:31.960 STDOUT terraform:  + port_id = (known after apply) 2025-09-13 00:01:31.960384 | orchestrator | 00:01:31.960 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.960418 | orchestrator | 00:01:31.960 STDOUT terraform:  + router_id = (known after apply) 2025-09-13 00:01:31.960447 | orchestrator | 00:01:31.960 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-13 00:01:31.960459 | orchestrator | 00:01:31.960 STDOUT terraform:  } 2025-09-13 00:01:31.960492 | orchestrator | 00:01:31.960 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-09-13 00:01:31.960528 | orchestrator | 00:01:31.960 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-13 00:01:31.960563 | orchestrator | 00:01:31.960 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-13 00:01:31.960601 | orchestrator | 00:01:31.960 STDOUT terraform:  + all_tags = (known after apply) 2025-09-13 00:01:31.960629 | orchestrator | 00:01:31.960 STDOUT terraform:  + availability_zone_hints = [ 2025-09-13 00:01:31.960644 | orchestrator | 00:01:31.960 STDOUT terraform:  + "nova", 2025-09-13 00:01:31.960653 | orchestrator | 00:01:31.960 STDOUT terraform:  ] 2025-09-13 00:01:31.960683 | orchestrator | 00:01:31.960 STDOUT terraform:  + distributed = (known after apply) 2025-09-13 00:01:31.960859 | orchestrator | 00:01:31.960 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-13 00:01:31.960928 | orchestrator | 00:01:31.960 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-13 00:01:31.960960 | orchestrator | 00:01:31.960 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-13 00:01:31.960982 | orchestrator | 00:01:31.960 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.960993 | orchestrator | 00:01:31.960 STDOUT terraform:  + name = "testbed" 2025-09-13 00:01:31.961003 | orchestrator | 00:01:31.960 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.961013 | orchestrator | 00:01:31.960 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-13 00:01:31.961022 | orchestrator | 00:01:31.960 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-13 00:01:31.961032 | orchestrator | 00:01:31.960 STDOUT terraform:  } 2025-09-13 00:01:31.961047 | orchestrator | 00:01:31.960 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-13 00:01:31.961080 | orchestrator | 00:01:31.961 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-13 00:01:31.961095 | orchestrator | 00:01:31.961 STDOUT terraform:  + description = "ssh" 2025-09-13 00:01:31.961120 | orchestrator | 00:01:31.961 STDOUT terraform:  + direction = "ingress" 2025-09-13 00:01:31.961142 | orchestrator | 00:01:31.961 STDOUT terraform:  + ethertype = "IPv4" 2025-09-13 00:01:31.961186 | orchestrator | 00:01:31.961 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.961202 | orchestrator | 00:01:31.961 STDOUT terraform:  + port_range_max = 22 2025-09-13 00:01:31.961215 | orchestrator | 00:01:31.961 STDOUT terraform:  + port_range_min = 22 2025-09-13 00:01:31.961244 | orchestrator | 00:01:31.961 STDOUT terraform:  + protocol = "tcp" 2025-09-13 00:01:31.961280 | orchestrator | 00:01:31.961 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.961315 | orchestrator | 00:01:31.961 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-13 00:01:31.961351 | orchestrator | 00:01:31.961 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-13 00:01:31.961385 | orchestrator | 00:01:31.961 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-13 00:01:31.961418 | orchestrator | 00:01:31.961 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-13 00:01:31.961453 | orchestrator | 00:01:31.961 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-13 00:01:31.961468 | orchestrator | 00:01:31.961 STDOUT terraform:  } 2025-09-13 00:01:31.961515 | orchestrator | 00:01:31.961 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-13 00:01:31.961568 | orchestrator | 00:01:31.961 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-13 00:01:31.961602 | orchestrator | 00:01:31.961 STDOUT terraform:  + description = "wireguard" 2025-09-13 00:01:31.961616 | orchestrator 
| 00:01:31.961 STDOUT terraform:  + direction = "ingress" 2025-09-13 00:01:31.961649 | orchestrator | 00:01:31.961 STDOUT terraform:  + ethertype = "IPv4" 2025-09-13 00:01:31.961686 | orchestrator | 00:01:31.961 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.961786 | orchestrator | 00:01:31.961 STDOUT terraform:  + port_range_max = 51820 2025-09-13 00:01:31.961806 | orchestrator | 00:01:31.961 STDOUT terraform:  + port_range_min = 51820 2025-09-13 00:01:31.961822 | orchestrator | 00:01:31.961 STDOUT terraform:  + protocol = "udp" 2025-09-13 00:01:31.961839 | orchestrator | 00:01:31.961 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.961856 | orchestrator | 00:01:31.961 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-13 00:01:31.961866 | orchestrator | 00:01:31.961 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-13 00:01:31.961879 | orchestrator | 00:01:31.961 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-13 00:01:31.961921 | orchestrator | 00:01:31.961 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-13 00:01:31.961936 | orchestrator | 00:01:31.961 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-13 00:01:31.961949 | orchestrator | 00:01:31.961 STDOUT terraform:  } 2025-09-13 00:01:31.962002 | orchestrator | 00:01:31.961 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-13 00:01:31.962175 | orchestrator | 00:01:31.961 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-13 00:01:31.962336 | orchestrator | 00:01:31.962 STDOUT terraform:  + direction = "ingress" 2025-09-13 00:01:31.962347 | orchestrator | 00:01:31.962 STDOUT terraform:  + ethertype = "IPv4" 2025-09-13 00:01:31.962375 | orchestrator | 00:01:31.962 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.962386 | orchestrator | 
00:01:31.962 STDOUT terraform:  + protocol = "tcp" 2025-09-13 00:01:31.962395 | orchestrator | 00:01:31.962 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.962405 | orchestrator | 00:01:31.962 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-13 00:01:31.962415 | orchestrator | 00:01:31.962 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-13 00:01:31.962424 | orchestrator | 00:01:31.962 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-13 00:01:31.962434 | orchestrator | 00:01:31.962 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-13 00:01:31.962447 | orchestrator | 00:01:31.962 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-13 00:01:31.962457 | orchestrator | 00:01:31.962 STDOUT terraform:  } 2025-09-13 00:01:31.962467 | orchestrator | 00:01:31.962 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-13 00:01:31.962664 | orchestrator | 00:01:31.962 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-13 00:01:31.962679 | orchestrator | 00:01:31.962 STDOUT terraform:  + direction = "ingress" 2025-09-13 00:01:31.962718 | orchestrator | 00:01:31.962 STDOUT terraform:  + ethertype = "IPv4" 2025-09-13 00:01:31.962736 | orchestrator | 00:01:31.962 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.962753 | orchestrator | 00:01:31.962 STDOUT terraform:  + protocol = "udp" 2025-09-13 00:01:31.962770 | orchestrator | 00:01:31.962 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.962787 | orchestrator | 00:01:31.962 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-13 00:01:31.962800 | orchestrator | 00:01:31.962 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-13 00:01:31.963914 | orchestrator | 00:01:31.962 STDOUT terraform:  + remote_ip_prefix = 
"192.168.16.0/20" 2025-09-13 00:01:31.963944 | orchestrator | 00:01:31.962 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-13 00:01:31.963954 | orchestrator | 00:01:31.962 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-13 00:01:31.963964 | orchestrator | 00:01:31.962 STDOUT terraform:  } 2025-09-13 00:01:31.963975 | orchestrator | 00:01:31.962 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-13 00:01:31.963985 | orchestrator | 00:01:31.962 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-13 00:01:31.963995 | orchestrator | 00:01:31.963 STDOUT terraform:  + direction = "ingress" 2025-09-13 00:01:31.964004 | orchestrator | 00:01:31.963 STDOUT terraform:  + ethertype = "IPv4" 2025-09-13 00:01:31.964014 | orchestrator | 00:01:31.963 STDOUT terraform:  + id = (known after apply) 2025-09-13 00:01:31.964024 | orchestrator | 00:01:31.963 STDOUT terraform:  + protocol = "icmp" 2025-09-13 00:01:31.964053 | orchestrator | 00:01:31.963 STDOUT terraform:  + region = (known after apply) 2025-09-13 00:01:31.964064 | orchestrator | 00:01:31.963 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-13 00:01:31.964074 | orchestrator | 00:01:31.963 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-13 00:01:31.964084 | orchestrator | 00:01:31.963 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-13 00:01:31.964094 | orchestrator | 00:01:31.963 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-13 00:01:31.964103 | orchestrator | 00:01:31.963 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-13 00:01:31.964113 | orchestrator | 00:01:31.963 STDOUT terraform:  } 2025-09-13 00:01:31.964123 | orchestrator | 00:01:31.963 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-09-13 00:01:31.964133 | 
orchestrator | 00:01:31.963 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2025-09-13 00:01:31.964143 | orchestrator | 00:01:31.963 STDOUT terraform:  + direction = "ingress"
2025-09-13 00:01:31.964153 | orchestrator | 00:01:31.963 STDOUT terraform:  + ethertype = "IPv4"
2025-09-13 00:01:31.964162 | orchestrator | 00:01:31.963 STDOUT terraform:  + id = (known after apply)
2025-09-13 00:01:31.964172 | orchestrator | 00:01:31.963 STDOUT terraform:  + protocol = "tcp"
2025-09-13 00:01:31.964181 | orchestrator | 00:01:31.963 STDOUT terraform:  + region = (known after apply)
2025-09-13 00:01:31.964191 | orchestrator | 00:01:31.963 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-13 00:01:31.964200 | orchestrator | 00:01:31.963 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-13 00:01:31.964210 | orchestrator | 00:01:31.963 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-13 00:01:31.964220 | orchestrator | 00:01:31.963 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-13 00:01:31.964230 | orchestrator | 00:01:31.963 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-13 00:01:31.964239 | orchestrator | 00:01:31.963 STDOUT terraform:  }
2025-09-13 00:01:31.964255 | orchestrator | 00:01:31.963 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-09-13 00:01:31.964265 | orchestrator | 00:01:31.963 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-09-13 00:01:31.964275 | orchestrator | 00:01:31.963 STDOUT terraform:  + direction = "ingress"
2025-09-13 00:01:31.964284 | orchestrator | 00:01:31.963 STDOUT terraform:  + ethertype = "IPv4"
2025-09-13 00:01:31.964294 | orchestrator | 00:01:31.963 STDOUT terraform:  + id = (known after apply)
2025-09-13 00:01:31.964304 | orchestrator | 00:01:31.963 STDOUT terraform:  + protocol = "udp"
2025-09-13 00:01:31.964314 | orchestrator | 00:01:31.963 STDOUT terraform:  + region = (known after apply)
2025-09-13 00:01:31.964323 | orchestrator | 00:01:31.964 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-13 00:01:31.964333 | orchestrator | 00:01:31.964 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-13 00:01:31.964348 | orchestrator | 00:01:31.964 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-13 00:01:31.964358 | orchestrator | 00:01:31.964 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-13 00:01:31.964368 | orchestrator | 00:01:31.964 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-13 00:01:31.964378 | orchestrator | 00:01:31.964 STDOUT terraform:  }
2025-09-13 00:01:31.964387 | orchestrator | 00:01:31.964 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-09-13 00:01:31.964401 | orchestrator | 00:01:31.964 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-09-13 00:01:31.964411 | orchestrator | 00:01:31.964 STDOUT terraform:  + direction = "ingress"
2025-09-13 00:01:31.964420 | orchestrator | 00:01:31.964 STDOUT terraform:  + ethertype = "IPv4"
2025-09-13 00:01:31.964430 | orchestrator | 00:01:31.964 STDOUT terraform:  + id = (known after apply)
2025-09-13 00:01:31.964440 | orchestrator | 00:01:31.964 STDOUT terraform:  + protocol = "icmp"
2025-09-13 00:01:31.964453 | orchestrator | 00:01:31.964 STDOUT terraform:  + region = (known after apply)
2025-09-13 00:01:31.964567 | orchestrator | 00:01:31.964 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-13 00:01:31.964817 | orchestrator | 00:01:31.964 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-13 00:01:31.964829 | orchestrator | 00:01:31.964 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-13 00:01:31.964844 | orchestrator | 00:01:31.964 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-13 00:01:31.964854 | orchestrator | 00:01:31.964 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-13 00:01:31.964863 | orchestrator | 00:01:31.964 STDOUT terraform:  }
2025-09-13 00:01:31.964873 | orchestrator | 00:01:31.964 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-09-13 00:01:31.964883 | orchestrator | 00:01:31.964 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-09-13 00:01:31.964894 | orchestrator | 00:01:31.964 STDOUT terraform:  + description = "vrrp"
2025-09-13 00:01:31.964903 | orchestrator | 00:01:31.964 STDOUT terraform:  + direction = "ingress"
2025-09-13 00:01:31.964913 | orchestrator | 00:01:31.964 STDOUT terraform:  + ethertype = "IPv4"
2025-09-13 00:01:31.964960 | orchestrator | 00:01:31.964 STDOUT terraform:  + id = (known after apply)
2025-09-13 00:01:31.964969 | orchestrator | 00:01:31.964 STDOUT terraform:  + protocol = "112"
2025-09-13 00:01:31.964977 | orchestrator | 00:01:31.964 STDOUT terraform:  + region = (known after apply)
2025-09-13 00:01:31.964987 | orchestrator | 00:01:31.964 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-13 00:01:31.964996 | orchestrator | 00:01:31.964 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-13 00:01:31.965007 | orchestrator | 00:01:31.964 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-13 00:01:31.965055 | orchestrator | 00:01:31.965 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-13 00:01:31.965098 | orchestrator | 00:01:31.965 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-13 00:01:31.965225 | orchestrator | 00:01:31.965 STDOUT terraform:  }
2025-09-13 00:01:31.965238 | orchestrator | 00:01:31.965 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-09-13 00:01:31.965293 | orchestrator | 00:01:31.965 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-09-13 00:01:31.965303 | orchestrator | 00:01:31.965 STDOUT terraform:  + all_tags = (known after apply)
2025-09-13 00:01:31.965375 | orchestrator | 00:01:31.965 STDOUT terraform:  + description = "management security group"
2025-09-13 00:01:31.965386 | orchestrator | 00:01:31.965 STDOUT terraform:  + id = (known after apply)
2025-09-13 00:01:31.965395 | orchestrator | 00:01:31.965 STDOUT terraform:  + name = "testbed-management"
2025-09-13 00:01:31.965403 | orchestrator | 00:01:31.965 STDOUT terraform:  + region = (known after apply)
2025-09-13 00:01:31.965411 | orchestrator | 00:01:31.965 STDOUT terraform:  + stateful = (known after apply)
2025-09-13 00:01:31.965419 | orchestrator | 00:01:31.965 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-13 00:01:31.965427 | orchestrator | 00:01:31.965 STDOUT terraform:  }
2025-09-13 00:01:31.965437 | orchestrator | 00:01:31.965 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-09-13 00:01:31.965536 | orchestrator | 00:01:31.965 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-09-13 00:01:31.965553 | orchestrator | 00:01:31.965 STDOUT terraform:  + all_tags = (known after apply)
2025-09-13 00:01:31.965561 | orchestrator | 00:01:31.965 STDOUT terraform:  + description = "node security group"
2025-09-13 00:01:31.965572 | orchestrator | 00:01:31.965 STDOUT terraform:  + id = (known after apply)
2025-09-13 00:01:31.965581 | orchestrator | 00:01:31.965 STDOUT terraform:  + name = "testbed-node"
2025-09-13 00:01:31.965591 | orchestrator | 00:01:31.965 STDOUT terraform:  + region = (known after apply)
2025-09-13 00:01:31.965629 | orchestrator | 00:01:31.965 STDOUT terraform:  + stateful = (known after apply)
2025-09-13 00:01:31.965642 | orchestrator | 00:01:31.965 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-13 00:01:31.965652 | orchestrator | 00:01:31.965 STDOUT terraform:  }
2025-09-13 00:01:31.965797 | orchestrator | 00:01:31.965 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-09-13 00:01:31.965810 | orchestrator | 00:01:31.965 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-09-13 00:01:31.965818 | orchestrator | 00:01:31.965 STDOUT terraform:  + all_tags = (known after apply)
2025-09-13 00:01:31.965829 | orchestrator | 00:01:31.965 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-09-13 00:01:31.965839 | orchestrator | 00:01:31.965 STDOUT terraform:  + dns_nameservers = [
2025-09-13 00:01:31.965874 | orchestrator | 00:01:31.965 STDOUT terraform:  + "8.8.8.8",
2025-09-13 00:01:31.966003 | orchestrator | 00:01:31.965 STDOUT terraform:  + "9.9.9.9",
2025-09-13 00:01:31.966045 | orchestrator | 00:01:31.965 STDOUT terraform:  ]
2025-09-13 00:01:31.966057 | orchestrator | 00:01:31.965 STDOUT terraform:  + enable_dhcp = true
2025-09-13 00:01:31.966065 | orchestrator | 00:01:31.965 STDOUT terraform:  + gateway_ip = (known after apply)
2025-09-13 00:01:31.966073 | orchestrator | 00:01:31.965 STDOUT terraform:  + id = (known after apply)
2025-09-13 00:01:31.966081 | orchestrator | 00:01:31.965 STDOUT terraform:  + ip_version = 4
2025-09-13 00:01:31.966089 | orchestrator | 00:01:31.965 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-09-13 00:01:31.966097 | orchestrator | 00:01:31.965 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-09-13 00:01:31.966140 | orchestrator | 00:01:31.966 STDOUT terraform:  + name = "subnet-testbed-management"
2025-09-13 00:01:31.966158 | orchestrator | 00:01:31.966 STDOUT terraform:  + network_id = (known after apply)
2025-09-13 00:01:31.966183 | orchestrator | 00:01:31.966 STDOUT terraform:  + no_gateway = false
2025-09-13 00:01:31.966216 | orchestrator | 00:01:31.966 STDOUT terraform:  + region = (known after apply)
2025-09-13 00:01:31.966320 | orchestrator | 00:01:31.966 STDOUT terraform:  + service_types = (known after apply)
2025-09-13 00:01:31.966330 | orchestrator | 00:01:31.966 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-13 00:01:31.966338 | orchestrator | 00:01:31.966 STDOUT terraform:  + allocation_pool {
2025-09-13 00:01:31.966346 | orchestrator | 00:01:31.966 STDOUT terraform:  + end = "192.168.31.250"
2025-09-13 00:01:31.966354 | orchestrator | 00:01:31.966 STDOUT terraform:  + start = "192.168.31.200"
2025-09-13 00:01:31.966362 | orchestrator | 00:01:31.966 STDOUT terraform:  }
2025-09-13 00:01:31.966373 | orchestrator | 00:01:31.966 STDOUT terraform:  }
2025-09-13 00:01:31.966381 | orchestrator | 00:01:31.966 STDOUT terraform:  # terraform_data.image will be created
2025-09-13 00:01:31.966392 | orchestrator | 00:01:31.966 STDOUT terraform:  + resource "terraform_data" "image" {
2025-09-13 00:01:31.966402 | orchestrator | 00:01:31.966 STDOUT terraform:  + id = (known after apply)
2025-09-13 00:01:31.966413 | orchestrator | 00:01:31.966 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-13 00:01:31.966446 | orchestrator | 00:01:31.966 STDOUT terraform:  + output = (known after apply)
2025-09-13 00:01:31.966463 | orchestrator | 00:01:31.966 STDOUT terraform:  }
2025-09-13 00:01:31.966474 | orchestrator | 00:01:31.966 STDOUT terraform:  # terraform_data.image_node will be created
2025-09-13 00:01:31.966560 | orchestrator | 00:01:31.966 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-09-13 00:01:31.966577 | orchestrator | 00:01:31.966 STDOUT terraform:  + id = (known after apply)
2025-09-13 00:01:31.966590 | orchestrator | 00:01:31.966 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-13 00:01:31.966603 | orchestrator | 00:01:31.966 STDOUT terraform:  + output = (known after apply)
2025-09-13 00:01:31.966620 | orchestrator | 00:01:31.966 STDOUT terraform:  }
2025-09-13 00:01:31.966633 | orchestrator | 00:01:31.966 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-09-13 00:01:31.966647 | orchestrator | 00:01:31.966 STDOUT terraform: Changes to Outputs:
2025-09-13 00:01:31.966670 | orchestrator | 00:01:31.966 STDOUT terraform:  + manager_address = (sensitive value)
2025-09-13 00:01:31.966686 | orchestrator | 00:01:31.966 STDOUT terraform:  + private_key = (sensitive value)
2025-09-13 00:01:32.194595 | orchestrator | 00:01:32.192 STDOUT terraform: terraform_data.image: Creating...
2025-09-13 00:01:32.194659 | orchestrator | 00:01:32.193 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=1f7f0369-7585-a698-1a05-18ec7e76f772]
2025-09-13 00:01:32.194668 | orchestrator | 00:01:32.193 STDOUT terraform: terraform_data.image_node: Creating...
2025-09-13 00:01:32.194676 | orchestrator | 00:01:32.194 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=640166f3-4cb9-a38f-40f5-39fcc425ac59]
2025-09-13 00:01:32.215136 | orchestrator | 00:01:32.214 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-09-13 00:01:32.225372 | orchestrator | 00:01:32.220 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-09-13 00:01:32.225444 | orchestrator | 00:01:32.221 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-09-13 00:01:32.227976 | orchestrator | 00:01:32.227 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-09-13 00:01:32.230991 | orchestrator | 00:01:32.230 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-09-13 00:01:32.233770 | orchestrator | 00:01:32.233 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-09-13 00:01:32.234473 | orchestrator | 00:01:32.234 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-09-13 00:01:32.237504 | orchestrator | 00:01:32.237 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
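For orientation, the testbed-node security group and its rules in the plan above correspond to HCL roughly like the following sketch. Resource names, protocols, and the `0.0.0.0/0` prefixes are taken from the planned attributes; everything else (attribute layout, the `security_group_id` reference) is an assumption, not the actual testbed source.

```hcl
# Sketch reconstructed from the plan output; the real testbed
# configuration may differ in structure and naming.
resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

# One rule per protocol (tcp/udp/icmp in the plan); tcp shown here.
resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

# VRRP has no protocol name in Neutron; the IP protocol number 112
# is passed as a string, matching the plan's protocol = "112".
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```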
2025-09-13 00:01:32.237539 | orchestrator | 00:01:32.237 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-09-13 00:01:32.237594 | orchestrator | 00:01:32.237 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-09-13 00:01:32.714968 | orchestrator | 00:01:32.714 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-13 00:01:32.721650 | orchestrator | 00:01:32.721 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-09-13 00:01:32.743589 | orchestrator | 00:01:32.743 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-09-13 00:01:32.751932 | orchestrator | 00:01:32.751 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-09-13 00:01:33.280129 | orchestrator | 00:01:33.279 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 0s [id=274b847e-0231-4d04-b7ae-1e05bcf9bbe3]
2025-09-13 00:01:33.283002 | orchestrator | 00:01:33.282 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-09-13 00:01:33.336429 | orchestrator | 00:01:33.336 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-13 00:01:33.344686 | orchestrator | 00:01:33.343 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
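The `subnet-testbed-management` entry planned earlier (CIDR `192.168.16.0/20`, DNS `8.8.8.8`/`9.9.9.9`, allocation pool `192.168.31.200`–`192.168.31.250`) maps onto HCL roughly like this sketch; the values are copied from the plan output, while the `network_id` reference is assumed.

```hcl
# Sketch of the planned management subnet; attribute values match the
# plan, the network reference is a plausible assumption.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP hands out addresses only from this pool, leaving the rest of
  # the /20 free for statically assigned node/manager ports.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```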
2025-09-13 00:01:35.854139 | orchestrator | 00:01:35.853 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=1763dbba-d504-4b6d-865a-93cad2d65fc8]
2025-09-13 00:01:35.869643 | orchestrator | 00:01:35.869 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=c5da3e8c-99b7-4761-a17c-7637f0eb6556]
2025-09-13 00:01:35.871077 | orchestrator | 00:01:35.870 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-09-13 00:01:35.875040 | orchestrator | 00:01:35.874 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=b696788a047812ad4027371d49ad34f7b67a33c5]
2025-09-13 00:01:35.882357 | orchestrator | 00:01:35.882 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-09-13 00:01:35.883185 | orchestrator | 00:01:35.883 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-09-13 00:01:35.888236 | orchestrator | 00:01:35.888 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=a91bac223ef59525f8bc754eb2b3a616e20ee729]
2025-09-13 00:01:35.891094 | orchestrator | 00:01:35.890 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=5a3f219a-02e3-456c-9d7f-0c5a8049cd2b]
2025-09-13 00:01:35.901768 | orchestrator | 00:01:35.901 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-09-13 00:01:35.902112 | orchestrator | 00:01:35.901 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=6e724704-b413-40a8-af93-f723a1c0b62f]
2025-09-13 00:01:35.902159 | orchestrator | 00:01:35.902 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-09-13 00:01:35.906126 | orchestrator | 00:01:35.905 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=e924364d-2e91-46ce-bd4b-cca5d229d1e6]
2025-09-13 00:01:35.908164 | orchestrator | 00:01:35.908 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-09-13 00:01:35.911437 | orchestrator | 00:01:35.911 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-09-13 00:01:35.919079 | orchestrator | 00:01:35.918 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=0c46d17e-adbc-49dd-8bd7-8befc745e964]
2025-09-13 00:01:35.922420 | orchestrator | 00:01:35.922 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-09-13 00:01:35.927702 | orchestrator | 00:01:35.927 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=e25c372e-2cb9-47f6-a0c5-1defd25ac71c]
2025-09-13 00:01:35.930675 | orchestrator | 00:01:35.930 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-09-13 00:01:35.944862 | orchestrator | 00:01:35.944 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=f868cbab-65ba-4325-b003-03d97073cddb]
2025-09-13 00:01:35.988334 | orchestrator | 00:01:35.988 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=9346358d-8291-41dd-be96-0d8c84c54113]
2025-09-13 00:01:36.713543 | orchestrator | 00:01:36.713 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=76d870f5-d774-430e-afce-ddbf5c522042]
2025-09-13 00:01:36.885659 | orchestrator | 00:01:36.885 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=52af0c2a-1766-44ca-936d-7659697c3b82]
2025-09-13 00:01:36.894453 | orchestrator | 00:01:36.894 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-09-13 00:01:39.326544 | orchestrator | 00:01:39.326 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=6e080a4d-0412-4b1d-8194-d3437a56371d]
2025-09-13 00:01:39.339932 | orchestrator | 00:01:39.339 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=42f67d50-f547-4949-ad70-272b6f024e96]
2025-09-13 00:01:39.386944 | orchestrator | 00:01:39.386 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=e8be299e-6f26-4fcd-9ad7-d2c8303193a1]
2025-09-13 00:01:39.388962 | orchestrator | 00:01:39.388 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=ae0b0158-ec1f-45de-80c9-2bee6f7c9d63]
2025-09-13 00:01:39.406163 | orchestrator | 00:01:39.405 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8]
2025-09-13 00:01:39.428243 | orchestrator | 00:01:39.428 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=868ff441-ad0d-4310-969c-d766af5d9c20]
2025-09-13 00:01:39.882482 | orchestrator | 00:01:39.882 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=bbd30818-bf2c-4bee-bb44-855752aa2477]
2025-09-13 00:01:39.893811 | orchestrator | 00:01:39.893 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-09-13 00:01:39.894793 | orchestrator | 00:01:39.894 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-09-13 00:01:39.896000 | orchestrator | 00:01:39.895 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-09-13 00:01:40.101101 | orchestrator | 00:01:40.100 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=67c5462c-0bff-44f0-9cca-82b33a19db38]
2025-09-13 00:01:40.112224 | orchestrator | 00:01:40.110 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-09-13 00:01:40.112328 | orchestrator | 00:01:40.110 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-09-13 00:01:40.112338 | orchestrator | 00:01:40.111 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-09-13 00:01:40.112345 | orchestrator | 00:01:40.111 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-09-13 00:01:40.112352 | orchestrator | 00:01:40.111 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-09-13 00:01:40.115372 | orchestrator | 00:01:40.115 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-09-13 00:01:40.286748 | orchestrator | 00:01:40.286 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=1c0f9fb2-f3af-434f-8ca7-c5d77ab65644]
2025-09-13 00:01:40.475037 | orchestrator | 00:01:40.474 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=bad8b4aa-f948-4d63-bb9a-898404cd2d3b]
2025-09-13 00:01:40.484602 | orchestrator | 00:01:40.484 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-09-13 00:01:40.493361 | orchestrator | 00:01:40.492 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-09-13 00:01:40.493416 | orchestrator | 00:01:40.492 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-09-13 00:01:40.493445 | orchestrator | 00:01:40.492 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=3a95b5cb-189f-4f57-b910-1cab54d8e11f]
2025-09-13 00:01:40.497210 | orchestrator | 00:01:40.497 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-09-13 00:01:40.498102 | orchestrator | 00:01:40.497 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-09-13 00:01:40.674997 | orchestrator | 00:01:40.674 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=9ddc8523-dd93-43d6-b25f-1b13a12c0cbd]
2025-09-13 00:01:40.688629 | orchestrator | 00:01:40.688 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-09-13 00:01:41.027241 | orchestrator | 00:01:41.026 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=98161266-5402-4b80-aa9f-9ee495fcd979]
2025-09-13 00:01:41.041622 | orchestrator | 00:01:41.041 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-09-13 00:01:41.200284 | orchestrator | 00:01:41.200 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=d7797b99-9368-4b69-aea7-3fd34c5b11ca]
2025-09-13 00:01:41.212128 | orchestrator | 00:01:41.211 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-09-13 00:01:41.344974 | orchestrator | 00:01:41.344 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=76419e2a-7bea-4ae0-92b5-0a528df8577c]
2025-09-13 00:01:41.357257 | orchestrator | 00:01:41.357 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-09-13 00:01:41.371172 | orchestrator | 00:01:41.370 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=e6d668e8-82dc-4889-bf1b-c7ad446b5e07]
2025-09-13 00:01:41.383410 | orchestrator | 00:01:41.383 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-09-13 00:01:41.537010 | orchestrator | 00:01:41.536 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=d5b6ed7c-a7d4-434d-bfa9-afaf6097afc7]
2025-09-13 00:01:41.916919 | orchestrator | 00:01:41.916 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=1d5c51e3-f9df-41f9-a19e-ebc9095ab8d7]
2025-09-13 00:01:41.927479 | orchestrator | 00:01:41.927 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=d1a96899-55df-4018-a638-98fe027b08b5]
2025-09-13 00:01:42.077797 | orchestrator | 00:01:42.077 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=3cc761c5-87d8-4c8f-993c-b8cebfcedaaf]
2025-09-13 00:01:42.096882 | orchestrator | 00:01:42.096 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=fb09e69a-87da-4be9-afd6-c363f896c10f]
2025-09-13 00:01:42.106248 | orchestrator | 00:01:42.105 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=525d4739-e4ce-4055-8bf4-d463f7e8ee10]
2025-09-13 00:01:42.243200 | orchestrator | 00:01:42.242 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=8c6fe8c6-f28b-4e3c-ae18-040730c0be74]
2025-09-13 00:01:42.256035 | orchestrator | 00:01:42.255 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-09-13 00:01:42.316475 | orchestrator | 00:01:42.316 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=42cdd83b-e09f-48be-8646-85c1f53bd945]
2025-09-13 00:01:42.378929 | orchestrator | 00:01:42.378 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=d69cc32c-c14f-4b87-afb9-c04d75247827]
2025-09-13 00:01:42.756606 | orchestrator | 00:01:42.756 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=4f722c75-97dc-47a5-8a2a-cff033142743]
2025-09-13 00:01:42.788968 | orchestrator | 00:01:42.788 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-09-13 00:01:42.794339 | orchestrator | 00:01:42.794 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-09-13 00:01:42.797790 | orchestrator | 00:01:42.797 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-09-13 00:01:42.811323 | orchestrator | 00:01:42.811 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-09-13 00:01:42.812943 | orchestrator | 00:01:42.812 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-09-13 00:01:42.813176 | orchestrator | 00:01:42.813 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-09-13 00:01:44.438962 | orchestrator | 00:01:44.438 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=6558c104-53b7-4cc9-80cc-eb98c9177bfc]
2025-09-13 00:01:44.449920 | orchestrator | 00:01:44.449 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-09-13 00:01:44.453973 | orchestrator | 00:01:44.453 STDOUT terraform: local_file.inventory: Creating...
2025-09-13 00:01:44.454749 | orchestrator | 00:01:44.454 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-09-13 00:01:44.463086 | orchestrator | 00:01:44.462 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=f260f347738bed1a25edd4fa0c66c442e3e42267]
2025-09-13 00:01:44.464741 | orchestrator | 00:01:44.464 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=767a157246834d34096f1f8bdf9f4e902ffef2dc]
2025-09-13 00:01:45.293002 | orchestrator | 00:01:45.292 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=6558c104-53b7-4cc9-80cc-eb98c9177bfc]
2025-09-13 00:01:52.790994 | orchestrator | 00:01:52.790 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-09-13 00:01:52.797005 | orchestrator | 00:01:52.796 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-09-13 00:01:52.800203 | orchestrator | 00:01:52.800 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-09-13 00:01:52.815334 | orchestrator | 00:01:52.815 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-09-13 00:01:52.815432 | orchestrator | 00:01:52.815 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-09-13 00:01:52.815714 | orchestrator | 00:01:52.815 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-09-13 00:02:02.794096 | orchestrator | 00:02:02.793 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-09-13 00:02:02.798100 | orchestrator | 00:02:02.797 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-09-13 00:02:02.801304 | orchestrator | 00:02:02.801 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-09-13 00:02:02.815693 | orchestrator | 00:02:02.815 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-09-13 00:02:02.815851 | orchestrator | 00:02:02.815 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-09-13 00:02:02.816788 | orchestrator | 00:02:02.816 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-09-13 00:02:03.495578 | orchestrator | 00:02:03.495 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=1434a248-a85a-4240-b093-fd9e59b20c7c]
2025-09-13 00:02:12.794457 | orchestrator | 00:02:12.794 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-09-13 00:02:12.799402 | orchestrator | 00:02:12.799 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-09-13 00:02:12.801644 | orchestrator | 00:02:12.801 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-09-13 00:02:12.815963 | orchestrator | 00:02:12.815 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-09-13 00:02:12.816129 | orchestrator | 00:02:12.815 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-09-13 00:02:13.768061 | orchestrator | 00:02:13.767 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=5bc37cd6-a9eb-4e2b-b6ce-9862970f5968]
2025-09-13 00:02:13.865049 | orchestrator | 00:02:13.864 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=49633e6c-ea46-40cf-9ead-e3c9cd2ff60e]
2025-09-13 00:02:13.939266 | orchestrator | 00:02:13.938 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=34b6302f-ba90-45be-a193-0e98923a6fb9]
2025-09-13 00:02:22.802334 | orchestrator | 00:02:22.801 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2025-09-13 00:02:22.816383 | orchestrator | 00:02:22.816 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2025-09-13 00:02:23.526057 | orchestrator | 00:02:23.525 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=23688b09-f1c4-4900-8432-6ada8182c476]
2025-09-13 00:02:24.677655 | orchestrator | 00:02:24.677 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 42s [id=7ea124da-9e3f-4e8c-b22d-e9b05ad4f529]
2025-09-13 00:02:24.697074 | orchestrator | 00:02:24.696 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-09-13 00:02:24.703500 | orchestrator | 00:02:24.703 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1627479511032112697]
2025-09-13 00:02:24.710397 | orchestrator | 00:02:24.710 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-09-13 00:02:24.711813 | orchestrator | 00:02:24.711 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-09-13 00:02:24.712026 | orchestrator | 00:02:24.711 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-09-13 00:02:24.720215 | orchestrator | 00:02:24.720 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-09-13 00:02:24.724189 | orchestrator | 00:02:24.724 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-09-13 00:02:24.726706 | orchestrator | 00:02:24.726 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-09-13 00:02:24.730134 | orchestrator | 00:02:24.730 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-09-13 00:02:24.733185 | orchestrator | 00:02:24.733 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-09-13 00:02:24.734711 | orchestrator | 00:02:24.734 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-09-13 00:02:24.743153 | orchestrator | 00:02:24.743 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-09-13 00:02:28.148071 | orchestrator | 00:02:28.147 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=49633e6c-ea46-40cf-9ead-e3c9cd2ff60e/5a3f219a-02e3-456c-9d7f-0c5a8049cd2b]
2025-09-13 00:02:28.172297 | orchestrator | 00:02:28.171 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=1434a248-a85a-4240-b093-fd9e59b20c7c/9346358d-8291-41dd-be96-0d8c84c54113]
2025-09-13 00:02:28.200512 | orchestrator | 00:02:28.200 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=1434a248-a85a-4240-b093-fd9e59b20c7c/c5da3e8c-99b7-4761-a17c-7637f0eb6556]
2025-09-13 00:02:28.653992 | orchestrator | 00:02:28.653 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=7ea124da-9e3f-4e8c-b22d-e9b05ad4f529/0c46d17e-adbc-49dd-8bd7-8befc745e964]
2025-09-13 00:02:28.714812 | orchestrator | 00:02:28.714 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=7ea124da-9e3f-4e8c-b22d-e9b05ad4f529/e25c372e-2cb9-47f6-a0c5-1defd25ac71c]
2025-09-13 00:02:34.253790 | orchestrator | 00:02:34.253 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 9s [id=49633e6c-ea46-40cf-9ead-e3c9cd2ff60e/f868cbab-65ba-4325-b003-03d97073cddb]
2025-09-13 00:02:34.409263 | orchestrator | 00:02:34.408 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 9s [id=1434a248-a85a-4240-b093-fd9e59b20c7c/1763dbba-d504-4b6d-865a-93cad2d65fc8]
2025-09-13 00:02:34.704244 | orchestrator | 00:02:34.703 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=49633e6c-ea46-40cf-9ead-e3c9cd2ff60e/e924364d-2e91-46ce-bd4b-cca5d229d1e6]
2025-09-13 00:02:34.746777 | orchestrator | 00:02:34.746 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-09-13 00:02:34.750062 | orchestrator | 00:02:34.749 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Still creating... [10s elapsed]
2025-09-13 00:02:34.985329 | orchestrator | 00:02:34.984 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=7ea124da-9e3f-4e8c-b22d-e9b05ad4f529/6e724704-b413-40a8-af93-f723a1c0b62f]
2025-09-13 00:02:44.747317 | orchestrator | 00:02:44.746 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-09-13 00:02:45.303556 | orchestrator | 00:02:45.303 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=9fbd3d34-c0dc-4a33-8464-62d8c6ad38c5]
2025-09-13 00:02:45.348723 | orchestrator | 00:02:45.348 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-09-13 00:02:45.348807 | orchestrator | 00:02:45.348 STDOUT terraform: Outputs: 2025-09-13 00:02:45.348820 | orchestrator | 00:02:45.348 STDOUT terraform: manager_address = 2025-09-13 00:02:45.348828 | orchestrator | 00:02:45.348 STDOUT terraform: private_key = 2025-09-13 00:02:45.609588 | orchestrator | ok: Runtime: 0:01:18.203556 2025-09-13 00:02:45.656083 | 2025-09-13 00:02:45.656357 | TASK [Create infrastructure (stable)] 2025-09-13 00:02:46.196634 | orchestrator | skipping: Conditional result was False 2025-09-13 00:02:46.205462 | 2025-09-13 00:02:46.205585 | TASK [Fetch manager address] 2025-09-13 00:02:46.623996 | orchestrator | ok 2025-09-13 00:02:46.637953 | 2025-09-13 00:02:46.638080 | TASK [Set manager_host address] 2025-09-13 00:02:46.714736 | orchestrator | ok 2025-09-13 00:02:46.721767 | 2025-09-13 00:02:46.721883 | LOOP [Update ansible collections] 2025-09-13 00:02:47.495928 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-13 00:02:47.496330 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-13 00:02:47.496389 | orchestrator | Starting galaxy collection install process 2025-09-13 00:02:47.496429 | orchestrator | Process install dependency map 2025-09-13 00:02:47.496465 | orchestrator | Starting collection install process 2025-09-13 00:02:47.496498 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2025-09-13 00:02:47.496535 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2025-09-13 00:02:47.496574 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-09-13 00:02:47.496639 | orchestrator | ok: Item: commons Runtime: 0:00:00.494006 2025-09-13 00:02:48.276096 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 
2025-09-13 00:02:48.276326 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-13 00:02:48.276394 | orchestrator | Starting galaxy collection install process 2025-09-13 00:02:48.276441 | orchestrator | Process install dependency map 2025-09-13 00:02:48.276486 | orchestrator | Starting collection install process 2025-09-13 00:02:48.276528 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-09-13 00:02:48.276571 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-09-13 00:02:48.276612 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-13 00:02:48.276672 | orchestrator | ok: Item: services Runtime: 0:00:00.544173 2025-09-13 00:02:48.297893 | 2025-09-13 00:02:48.298047 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-13 00:02:58.867263 | orchestrator | ok 2025-09-13 00:02:58.879031 | 2025-09-13 00:02:58.879148 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-13 00:03:58.918395 | orchestrator | ok 2025-09-13 00:03:58.928982 | 2025-09-13 00:03:58.929106 | TASK [Fetch manager ssh hostkey] 2025-09-13 00:04:00.499642 | orchestrator | Output suppressed because no_log was given 2025-09-13 00:04:00.515632 | 2025-09-13 00:04:00.515797 | TASK [Get ssh keypair from terraform environment] 2025-09-13 00:04:01.052981 | orchestrator | ok: Runtime: 0:00:00.005584 2025-09-13 00:04:01.061085 | 2025-09-13 00:04:01.061212 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-13 00:04:01.107193 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-09-13 00:04:01.122558 | 2025-09-13 00:04:01.122715 | TASK [Run manager part 0] 2025-09-13 00:04:01.932601 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-13 00:04:01.976846 | orchestrator | 2025-09-13 00:04:01.976907 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-13 00:04:01.976921 | orchestrator | 2025-09-13 00:04:01.976946 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-13 00:04:03.744346 | orchestrator | ok: [testbed-manager] 2025-09-13 00:04:03.744389 | orchestrator | 2025-09-13 00:04:03.744414 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-13 00:04:03.744426 | orchestrator | 2025-09-13 00:04:03.744436 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-13 00:04:05.469315 | orchestrator | ok: [testbed-manager] 2025-09-13 00:04:05.469354 | orchestrator | 2025-09-13 00:04:05.469361 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-13 00:04:06.069473 | orchestrator | ok: [testbed-manager] 2025-09-13 00:04:06.069503 | orchestrator | 2025-09-13 00:04:06.069510 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-13 00:04:06.106844 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:04:06.106875 | orchestrator | 2025-09-13 00:04:06.106884 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-13 00:04:06.129214 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:04:06.129263 | orchestrator | 2025-09-13 00:04:06.129276 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-13 00:04:06.162246 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:04:06.162283 | 
orchestrator | 2025-09-13 00:04:06.162290 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-13 00:04:06.193984 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:04:06.194035 | orchestrator | 2025-09-13 00:04:06.194041 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-13 00:04:06.222032 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:04:06.222064 | orchestrator | 2025-09-13 00:04:06.222071 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-13 00:04:06.247502 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:04:06.247547 | orchestrator | 2025-09-13 00:04:06.247555 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-13 00:04:06.271138 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:04:06.271167 | orchestrator | 2025-09-13 00:04:06.271176 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-13 00:04:06.975358 | orchestrator | changed: [testbed-manager] 2025-09-13 00:04:06.975402 | orchestrator | 2025-09-13 00:04:06.975411 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-13 00:06:43.960526 | orchestrator | changed: [testbed-manager] 2025-09-13 00:06:43.960641 | orchestrator | 2025-09-13 00:06:43.960660 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-13 00:12:31.667271 | orchestrator | changed: [testbed-manager] 2025-09-13 00:12:31.667441 | orchestrator | 2025-09-13 00:12:31.667493 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-13 00:12:58.962202 | orchestrator | changed: [testbed-manager] 2025-09-13 00:12:58.962292 | orchestrator | 2025-09-13 00:12:58.962309 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-09-13 00:13:09.247678 | orchestrator | changed: [testbed-manager] 2025-09-13 00:13:09.247707 | orchestrator | 2025-09-13 00:13:09.247714 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-13 00:13:09.277245 | orchestrator | ok: [testbed-manager] 2025-09-13 00:13:09.277272 | orchestrator | 2025-09-13 00:13:09.277279 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-13 00:13:10.037639 | orchestrator | ok: [testbed-manager] 2025-09-13 00:13:10.037674 | orchestrator | 2025-09-13 00:13:10.037684 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-13 00:13:10.774160 | orchestrator | changed: [testbed-manager] 2025-09-13 00:13:10.774196 | orchestrator | 2025-09-13 00:13:10.774205 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-13 00:13:16.924190 | orchestrator | changed: [testbed-manager] 2025-09-13 00:13:16.924258 | orchestrator | 2025-09-13 00:13:16.924283 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-13 00:13:23.052016 | orchestrator | changed: [testbed-manager] 2025-09-13 00:13:23.052098 | orchestrator | 2025-09-13 00:13:23.052114 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-13 00:13:25.769785 | orchestrator | changed: [testbed-manager] 2025-09-13 00:13:25.769869 | orchestrator | 2025-09-13 00:13:25.769886 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-13 00:13:27.542569 | orchestrator | changed: [testbed-manager] 2025-09-13 00:13:27.542680 | orchestrator | 2025-09-13 00:13:27.542710 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-13 
00:13:28.629078 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-13 00:13:28.629124 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-13 00:13:28.629132 | orchestrator | 2025-09-13 00:13:28.629139 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-13 00:13:28.677233 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-13 00:13:28.677306 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-13 00:13:28.677319 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-13 00:13:28.677332 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-13 00:13:31.884189 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-13 00:13:31.884259 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-13 00:13:31.884271 | orchestrator | 2025-09-13 00:13:31.884281 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-13 00:13:32.449692 | orchestrator | changed: [testbed-manager] 2025-09-13 00:13:32.449773 | orchestrator | 2025-09-13 00:13:32.449789 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-13 00:13:52.383955 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-13 00:13:52.384119 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-13 00:13:52.384138 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-13 00:13:52.384152 | orchestrator | 2025-09-13 00:13:52.384165 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-13 00:13:54.686414 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-09-13 00:13:54.686478 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-13 00:13:54.686487 | orchestrator | 2025-09-13 00:13:54.686494 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-13 00:13:54.686502 | orchestrator | 2025-09-13 00:13:54.686509 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-13 00:13:56.047551 | orchestrator | ok: [testbed-manager] 2025-09-13 00:13:56.047588 | orchestrator | 2025-09-13 00:13:56.047596 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-13 00:13:56.086157 | orchestrator | ok: [testbed-manager] 2025-09-13 00:13:56.086241 | orchestrator | 2025-09-13 00:13:56.086258 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-13 00:13:56.143863 | orchestrator | ok: [testbed-manager] 2025-09-13 00:13:56.143935 | orchestrator | 2025-09-13 00:13:56.143950 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-13 00:13:56.895181 | orchestrator | changed: [testbed-manager] 2025-09-13 00:13:56.895376 | orchestrator | 2025-09-13 00:13:56.895396 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-13 00:13:57.587936 | orchestrator | changed: [testbed-manager] 2025-09-13 00:13:57.588748 | orchestrator | 2025-09-13 00:13:57.588785 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-13 00:13:58.962916 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-13 00:13:58.963006 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-13 00:13:58.963023 | orchestrator | 2025-09-13 00:13:58.963049 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-09-13 00:14:00.300463 | orchestrator | changed: [testbed-manager] 2025-09-13 00:14:00.300555 | orchestrator | 2025-09-13 00:14:00.300571 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-13 00:14:02.066833 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-13 00:14:02.066916 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-13 00:14:02.066930 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-13 00:14:02.066941 | orchestrator | 2025-09-13 00:14:02.066954 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-13 00:14:02.124512 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:14:02.124574 | orchestrator | 2025-09-13 00:14:02.124588 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-13 00:14:02.691967 | orchestrator | changed: [testbed-manager] 2025-09-13 00:14:02.692618 | orchestrator | 2025-09-13 00:14:02.692646 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-13 00:14:02.758649 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:14:02.758722 | orchestrator | 2025-09-13 00:14:02.758736 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-13 00:14:03.622933 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-13 00:14:03.623021 | orchestrator | changed: [testbed-manager] 2025-09-13 00:14:03.623036 | orchestrator | 2025-09-13 00:14:03.623048 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-13 00:14:03.661793 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:14:03.661849 | orchestrator | 2025-09-13 00:14:03.661857 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-13 00:14:03.691089 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:14:03.691139 | orchestrator | 2025-09-13 00:14:03.691148 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-13 00:14:03.726338 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:14:03.726381 | orchestrator | 2025-09-13 00:14:03.726389 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-13 00:14:03.776392 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:14:03.776457 | orchestrator | 2025-09-13 00:14:03.776467 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-13 00:14:04.513813 | orchestrator | ok: [testbed-manager] 2025-09-13 00:14:04.513860 | orchestrator | 2025-09-13 00:14:04.513957 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-13 00:14:04.513964 | orchestrator | 2025-09-13 00:14:04.513968 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-13 00:14:05.967482 | orchestrator | ok: [testbed-manager] 2025-09-13 00:14:05.967548 | orchestrator | 2025-09-13 00:14:05.967564 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-13 00:14:06.897224 | orchestrator | changed: [testbed-manager] 2025-09-13 00:14:06.897330 | orchestrator | 2025-09-13 00:14:06.897346 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:14:06.897360 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-13 00:14:06.897371 | orchestrator | 2025-09-13 00:14:07.057282 | orchestrator | ok: Runtime: 0:10:05.589420 2025-09-13 00:14:07.070599 | 2025-09-13 00:14:07.070723 | TASK [Point 
out that the log in on the manager is now possible] 2025-09-13 00:14:07.118140 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-13 00:14:07.128261 | 2025-09-13 00:14:07.128379 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-13 00:14:07.164045 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-13 00:14:07.170737 | 2025-09-13 00:14:07.170894 | TASK [Run manager part 1 + 2] 2025-09-13 00:14:07.945459 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-13 00:14:07.995040 | orchestrator | 2025-09-13 00:14:07.995084 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-13 00:14:07.995091 | orchestrator | 2025-09-13 00:14:07.995103 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-13 00:14:10.358238 | orchestrator | ok: [testbed-manager] 2025-09-13 00:14:10.358273 | orchestrator | 2025-09-13 00:14:10.358292 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-13 00:14:10.384515 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:14:10.384547 | orchestrator | 2025-09-13 00:14:10.384555 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-13 00:14:10.415144 | orchestrator | ok: [testbed-manager] 2025-09-13 00:14:10.415268 | orchestrator | 2025-09-13 00:14:10.415281 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-13 00:14:10.462312 | orchestrator | ok: [testbed-manager] 2025-09-13 00:14:10.462348 | orchestrator | 2025-09-13 00:14:10.462357 | orchestrator | TASK [osism.commons.repository : Set repository_default fact 
to default value] *** 2025-09-13 00:14:10.533329 | orchestrator | ok: [testbed-manager] 2025-09-13 00:14:10.533365 | orchestrator | 2025-09-13 00:14:10.533374 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-13 00:14:10.586576 | orchestrator | ok: [testbed-manager] 2025-09-13 00:14:10.586610 | orchestrator | 2025-09-13 00:14:10.586619 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-13 00:14:10.623385 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-13 00:14:10.623409 | orchestrator | 2025-09-13 00:14:10.623414 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-13 00:14:11.294752 | orchestrator | ok: [testbed-manager] 2025-09-13 00:14:11.294795 | orchestrator | 2025-09-13 00:14:11.294806 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-13 00:14:11.342331 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:14:11.342364 | orchestrator | 2025-09-13 00:14:11.342371 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-13 00:14:12.653356 | orchestrator | changed: [testbed-manager] 2025-09-13 00:14:12.653400 | orchestrator | 2025-09-13 00:14:12.653410 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-13 00:14:13.231317 | orchestrator | ok: [testbed-manager] 2025-09-13 00:14:13.231387 | orchestrator | 2025-09-13 00:14:13.231403 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-13 00:14:14.365062 | orchestrator | changed: [testbed-manager] 2025-09-13 00:14:14.365134 | orchestrator | 2025-09-13 00:14:14.365152 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-09-13 00:14:31.885997 | orchestrator | changed: [testbed-manager] 2025-09-13 00:14:31.886107 | orchestrator | 2025-09-13 00:14:31.886124 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-13 00:14:32.560229 | orchestrator | ok: [testbed-manager] 2025-09-13 00:14:32.560320 | orchestrator | 2025-09-13 00:14:32.560339 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-13 00:14:32.610759 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:14:32.610826 | orchestrator | 2025-09-13 00:14:32.610839 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-13 00:14:33.557262 | orchestrator | changed: [testbed-manager] 2025-09-13 00:14:33.557347 | orchestrator | 2025-09-13 00:14:33.557364 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-13 00:14:34.541248 | orchestrator | changed: [testbed-manager] 2025-09-13 00:14:34.541309 | orchestrator | 2025-09-13 00:14:34.541323 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-13 00:14:35.147042 | orchestrator | changed: [testbed-manager] 2025-09-13 00:14:35.147101 | orchestrator | 2025-09-13 00:14:35.147116 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-13 00:14:35.184840 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-13 00:14:35.184936 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-13 00:14:35.184952 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-13 00:14:35.185076 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-09-13 00:14:37.488957 | orchestrator | changed: [testbed-manager] 2025-09-13 00:14:37.489029 | orchestrator | 2025-09-13 00:14:37.489045 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-13 00:14:45.637556 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-13 00:14:45.637647 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-13 00:14:45.637665 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-13 00:14:45.637677 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-13 00:14:45.637696 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-13 00:14:45.637708 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-13 00:14:45.637719 | orchestrator | 2025-09-13 00:14:45.637732 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-13 00:14:46.710582 | orchestrator | changed: [testbed-manager] 2025-09-13 00:14:46.710672 | orchestrator | 2025-09-13 00:14:46.710688 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-13 00:14:46.753598 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:14:46.753678 | orchestrator | 2025-09-13 00:14:46.753693 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-13 00:14:49.982939 | orchestrator | changed: [testbed-manager] 2025-09-13 00:14:49.983057 | orchestrator | 2025-09-13 00:14:49.983075 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-13 00:14:50.027783 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:14:50.027857 | orchestrator | 2025-09-13 00:14:50.027872 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-13 00:16:25.236190 | orchestrator | changed: [testbed-manager] 2025-09-13 
00:16:25.236354 | orchestrator | 2025-09-13 00:16:25.236375 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-13 00:16:26.400564 | orchestrator | ok: [testbed-manager] 2025-09-13 00:16:26.400667 | orchestrator | 2025-09-13 00:16:26.400700 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:16:26.400728 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-13 00:16:26.400747 | orchestrator | 2025-09-13 00:16:26.780681 | orchestrator | ok: Runtime: 0:02:19.039682 2025-09-13 00:16:26.798602 | 2025-09-13 00:16:26.798751 | TASK [Reboot manager] 2025-09-13 00:16:28.335737 | orchestrator | ok: Runtime: 0:00:00.969072 2025-09-13 00:16:28.353906 | 2025-09-13 00:16:28.354050 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-13 00:16:44.774540 | orchestrator | ok 2025-09-13 00:16:44.785219 | 2025-09-13 00:16:44.785346 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-13 00:17:44.832136 | orchestrator | ok 2025-09-13 00:17:44.842794 | 2025-09-13 00:17:44.842958 | TASK [Deploy manager + bootstrap nodes] 2025-09-13 00:17:47.533439 | orchestrator | 2025-09-13 00:17:47.533599 | orchestrator | # DEPLOY MANAGER 2025-09-13 00:17:47.533619 | orchestrator | 2025-09-13 00:17:47.533630 | orchestrator | + set -e 2025-09-13 00:17:47.533641 | orchestrator | + echo 2025-09-13 00:17:47.533653 | orchestrator | + echo '# DEPLOY MANAGER' 2025-09-13 00:17:47.533667 | orchestrator | + echo 2025-09-13 00:17:47.533708 | orchestrator | + cat /opt/manager-vars.sh 2025-09-13 00:17:47.539831 | orchestrator | export NUMBER_OF_NODES=6 2025-09-13 00:17:47.539850 | orchestrator | 2025-09-13 00:17:47.539862 | orchestrator | export CEPH_VERSION=reef 2025-09-13 00:17:47.539873 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-13 00:17:47.539883 | orchestrator 
| export MANAGER_VERSION=latest 2025-09-13 00:17:47.539901 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-13 00:17:47.539910 | orchestrator | 2025-09-13 00:17:47.539925 | orchestrator | export ARA=false 2025-09-13 00:17:47.539935 | orchestrator | export DEPLOY_MODE=manager 2025-09-13 00:17:47.539949 | orchestrator | export TEMPEST=true 2025-09-13 00:17:47.539958 | orchestrator | export IS_ZUUL=true 2025-09-13 00:17:47.539967 | orchestrator | 2025-09-13 00:17:47.539982 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.209 2025-09-13 00:17:47.539992 | orchestrator | export EXTERNAL_API=false 2025-09-13 00:17:47.540001 | orchestrator | 2025-09-13 00:17:47.540010 | orchestrator | export IMAGE_USER=ubuntu 2025-09-13 00:17:47.540022 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-13 00:17:47.540030 | orchestrator | 2025-09-13 00:17:47.540039 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-13 00:17:47.540367 | orchestrator | 2025-09-13 00:17:47.540380 | orchestrator | + echo 2025-09-13 00:17:47.540390 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-13 00:17:47.541630 | orchestrator | ++ export INTERACTIVE=false 2025-09-13 00:17:47.541645 | orchestrator | ++ INTERACTIVE=false 2025-09-13 00:17:47.541656 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-13 00:17:47.541665 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-13 00:17:47.541674 | orchestrator | + source /opt/manager-vars.sh 2025-09-13 00:17:47.541683 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-13 00:17:47.541692 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-13 00:17:47.541700 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-13 00:17:47.541709 | orchestrator | ++ CEPH_VERSION=reef 2025-09-13 00:17:47.541718 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-13 00:17:47.541727 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-13 00:17:47.541736 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-13 00:17:47.541744 | 
orchestrator | ++ MANAGER_VERSION=latest 2025-09-13 00:17:47.541753 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-13 00:17:47.541768 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-13 00:17:47.541777 | orchestrator | ++ export ARA=false 2025-09-13 00:17:47.541786 | orchestrator | ++ ARA=false 2025-09-13 00:17:47.541795 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-13 00:17:47.541803 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-13 00:17:47.541812 | orchestrator | ++ export TEMPEST=true 2025-09-13 00:17:47.541821 | orchestrator | ++ TEMPEST=true 2025-09-13 00:17:47.541829 | orchestrator | ++ export IS_ZUUL=true 2025-09-13 00:17:47.541838 | orchestrator | ++ IS_ZUUL=true 2025-09-13 00:17:47.541847 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.209 2025-09-13 00:17:47.541856 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.209 2025-09-13 00:17:47.541864 | orchestrator | ++ export EXTERNAL_API=false 2025-09-13 00:17:47.541873 | orchestrator | ++ EXTERNAL_API=false 2025-09-13 00:17:47.541881 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-13 00:17:47.541890 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-13 00:17:47.541899 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-13 00:17:47.541907 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-13 00:17:47.541916 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-13 00:17:47.541925 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-13 00:17:47.541934 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-09-13 00:17:47.603785 | orchestrator | + docker version 2025-09-13 00:17:47.941798 | orchestrator | Client: Docker Engine - Community 2025-09-13 00:17:47.941860 | orchestrator | Version: 27.5.1 2025-09-13 00:17:47.941868 | orchestrator | API version: 1.47 2025-09-13 00:17:47.941875 | orchestrator | Go version: go1.22.11 2025-09-13 00:17:47.941881 | orchestrator | Git commit: 9f9e405 2025-09-13 00:17:47.941886 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-13 00:17:47.941893 | orchestrator | OS/Arch: linux/amd64 2025-09-13 00:17:47.941898 | orchestrator | Context: default 2025-09-13 00:17:47.941903 | orchestrator | 2025-09-13 00:17:47.941909 | orchestrator | Server: Docker Engine - Community 2025-09-13 00:17:47.941914 | orchestrator | Engine: 2025-09-13 00:17:47.941919 | orchestrator | Version: 27.5.1 2025-09-13 00:17:47.941925 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-09-13 00:17:47.941951 | orchestrator | Go version: go1.22.11 2025-09-13 00:17:47.941957 | orchestrator | Git commit: 4c9b3b0 2025-09-13 00:17:47.941962 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-13 00:17:47.941967 | orchestrator | OS/Arch: linux/amd64 2025-09-13 00:17:47.941972 | orchestrator | Experimental: false 2025-09-13 00:17:47.941978 | orchestrator | containerd: 2025-09-13 00:17:47.941983 | orchestrator | Version: 1.7.27 2025-09-13 00:17:47.941988 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-09-13 00:17:47.941994 | orchestrator | runc: 2025-09-13 00:17:47.942007 | orchestrator | Version: 1.2.5 2025-09-13 00:17:47.942012 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-09-13 00:17:47.942047 | orchestrator | docker-init: 2025-09-13 00:17:47.942052 | orchestrator | Version: 0.19.0 2025-09-13 00:17:47.942058 | orchestrator | GitCommit: de40ad0 2025-09-13 00:17:47.947079 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-09-13 00:17:47.958508 | orchestrator | + set -e 2025-09-13 00:17:47.958518 | orchestrator | + source /opt/manager-vars.sh 2025-09-13 00:17:47.958524 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-13 00:17:47.958531 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-13 00:17:47.958541 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-13 00:17:47.958547 | orchestrator | ++ CEPH_VERSION=reef 2025-09-13 00:17:47.958553 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-13 
00:17:47.958558 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-13 00:17:47.958566 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-13 00:17:47.958571 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-13 00:17:47.958576 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-13 00:17:47.958582 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-13 00:17:47.958587 | orchestrator | ++ export ARA=false 2025-09-13 00:17:47.958592 | orchestrator | ++ ARA=false 2025-09-13 00:17:47.958597 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-13 00:17:47.958602 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-13 00:17:47.958612 | orchestrator | ++ export TEMPEST=true 2025-09-13 00:17:47.958617 | orchestrator | ++ TEMPEST=true 2025-09-13 00:17:47.958622 | orchestrator | ++ export IS_ZUUL=true 2025-09-13 00:17:47.958627 | orchestrator | ++ IS_ZUUL=true 2025-09-13 00:17:47.958632 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.209 2025-09-13 00:17:47.958637 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.209 2025-09-13 00:17:47.958644 | orchestrator | ++ export EXTERNAL_API=false 2025-09-13 00:17:47.958650 | orchestrator | ++ EXTERNAL_API=false 2025-09-13 00:17:47.958655 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-13 00:17:47.958660 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-13 00:17:47.958669 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-13 00:17:47.958674 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-13 00:17:47.958679 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-13 00:17:47.958684 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-13 00:17:47.958691 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-13 00:17:47.958703 | orchestrator | ++ export INTERACTIVE=false 2025-09-13 00:17:47.958708 | orchestrator | ++ INTERACTIVE=false 2025-09-13 00:17:47.958713 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-13 00:17:47.958744 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2025-09-13 00:17:47.959239 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-13 00:17:47.959247 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-13 00:17:47.959252 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-09-13 00:17:47.966804 | orchestrator | + set -e 2025-09-13 00:17:47.966814 | orchestrator | + VERSION=reef 2025-09-13 00:17:47.968201 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-13 00:17:47.975500 | orchestrator | + [[ -n ceph_version: reef ]] 2025-09-13 00:17:47.975511 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-09-13 00:17:47.982939 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-09-13 00:17:47.991076 | orchestrator | + set -e 2025-09-13 00:17:47.991088 | orchestrator | + VERSION=2024.2 2025-09-13 00:17:47.991748 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-13 00:17:47.996547 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-09-13 00:17:47.996564 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-09-13 00:17:48.003064 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-09-13 00:17:48.004190 | orchestrator | ++ semver latest 7.0.0 2025-09-13 00:17:48.068395 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-13 00:17:48.068462 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-13 00:17:48.068470 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-09-13 00:17:48.068477 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-09-13 00:17:48.162526 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-13 00:17:48.170808 | orchestrator | + source /opt/venv/bin/activate 2025-09-13 00:17:48.172016 | orchestrator | ++ 
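The set-ceph-version.sh / set-openstack-version.sh steps traced above follow one pattern: verify the key already exists in the configuration file, then rewrite its value in place with sed. A minimal self-contained sketch of that pattern (the file contents and path here are illustrative, not the testbed's real configuration):

```shell
#!/bin/sh
# Hypothetical stand-in for set-ceph-version.sh: pin a version key in a
# YAML file, but only if the key is already present (never add it).
set -e

CONF=$(mktemp)
printf 'ceph_version: quincy\nopenstack_version: 2024.1\n' > "$CONF"

VERSION=reef
# Guard: rewrite only an existing key, matching the trace's grep check.
if [ -n "$(grep '^ceph_version:' "$CONF")" ]; then
    sed -i "s/ceph_version: .*/ceph_version: $VERSION/g" "$CONF"
fi

grep '^ceph_version:' "$CONF"   # ceph_version: reef
```

The guard matters: if the key were absent, a bare `sed` would silently do nothing, so the script checks first and (with `set -e`) can fail loudly instead of deploying a stale version.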
deactivate nondestructive 2025-09-13 00:17:48.172025 | orchestrator | ++ '[' -n '' ']' 2025-09-13 00:17:48.172032 | orchestrator | ++ '[' -n '' ']' 2025-09-13 00:17:48.172068 | orchestrator | ++ hash -r 2025-09-13 00:17:48.172075 | orchestrator | ++ '[' -n '' ']' 2025-09-13 00:17:48.172188 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-13 00:17:48.172197 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-13 00:17:48.172202 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-09-13 00:17:48.172211 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-13 00:17:48.172264 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-13 00:17:48.172272 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-13 00:17:48.172311 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-13 00:17:48.172440 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-13 00:17:48.172454 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-13 00:17:48.172460 | orchestrator | ++ export PATH 2025-09-13 00:17:48.172469 | orchestrator | ++ '[' -n '' ']' 2025-09-13 00:17:48.172475 | orchestrator | ++ '[' -z '' ']' 2025-09-13 00:17:48.172487 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-13 00:17:48.172532 | orchestrator | ++ PS1='(venv) ' 2025-09-13 00:17:48.172539 | orchestrator | ++ export PS1 2025-09-13 00:17:48.172545 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-13 00:17:48.172551 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-13 00:17:48.172621 | orchestrator | ++ hash -r 2025-09-13 00:17:48.172694 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-09-13 00:17:49.633544 | orchestrator | 2025-09-13 00:17:49.633647 | orchestrator | PLAY [Copy custom facts] 
******************************************************* 2025-09-13 00:17:49.633662 | orchestrator | 2025-09-13 00:17:49.633675 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-13 00:17:50.261458 | orchestrator | ok: [testbed-manager] 2025-09-13 00:17:50.261568 | orchestrator | 2025-09-13 00:17:50.261583 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-13 00:17:51.257883 | orchestrator | changed: [testbed-manager] 2025-09-13 00:17:51.257980 | orchestrator | 2025-09-13 00:17:51.257994 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-09-13 00:17:51.258006 | orchestrator | 2025-09-13 00:17:51.258054 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-13 00:17:53.795096 | orchestrator | ok: [testbed-manager] 2025-09-13 00:17:53.795195 | orchestrator | 2025-09-13 00:17:53.795210 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-09-13 00:17:53.851976 | orchestrator | ok: [testbed-manager] 2025-09-13 00:17:53.852011 | orchestrator | 2025-09-13 00:17:53.852027 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-09-13 00:17:54.340846 | orchestrator | changed: [testbed-manager] 2025-09-13 00:17:54.340948 | orchestrator | 2025-09-13 00:17:54.340963 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-09-13 00:17:54.377970 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:17:54.378069 | orchestrator | 2025-09-13 00:17:54.378083 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-13 00:17:54.736899 | orchestrator | changed: [testbed-manager] 2025-09-13 00:17:54.737003 | orchestrator | 2025-09-13 00:17:54.737019 | orchestrator | TASK [Use insecure 
glance configuration] *************************************** 2025-09-13 00:17:54.787541 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:17:54.787610 | orchestrator | 2025-09-13 00:17:54.787627 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-09-13 00:17:55.144962 | orchestrator | ok: [testbed-manager] 2025-09-13 00:17:55.145075 | orchestrator | 2025-09-13 00:17:55.145092 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-09-13 00:17:55.283684 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:17:55.283743 | orchestrator | 2025-09-13 00:17:55.283756 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-09-13 00:17:55.283768 | orchestrator | 2025-09-13 00:17:55.283796 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-13 00:17:57.151836 | orchestrator | ok: [testbed-manager] 2025-09-13 00:17:57.151925 | orchestrator | 2025-09-13 00:17:57.151938 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-09-13 00:17:57.254240 | orchestrator | included: osism.services.traefik for testbed-manager 2025-09-13 00:17:57.254290 | orchestrator | 2025-09-13 00:17:57.254302 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-09-13 00:17:57.311382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-09-13 00:17:57.311412 | orchestrator | 2025-09-13 00:17:57.311470 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-09-13 00:17:58.495111 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-09-13 00:17:58.495205 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2025-09-13 00:17:58.495220 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-09-13 00:17:58.495232 | orchestrator | 2025-09-13 00:17:58.495244 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-09-13 00:18:00.400834 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-09-13 00:18:00.400946 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-09-13 00:18:00.400976 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-09-13 00:18:00.400989 | orchestrator | 2025-09-13 00:18:00.401001 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-09-13 00:18:01.060355 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-13 00:18:01.060408 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:01.060461 | orchestrator | 2025-09-13 00:18:01.060474 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-09-13 00:18:01.739375 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-13 00:18:01.739510 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:01.739524 | orchestrator | 2025-09-13 00:18:01.739535 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-09-13 00:18:01.790324 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:18:01.790352 | orchestrator | 2025-09-13 00:18:01.790363 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-09-13 00:18:02.199411 | orchestrator | ok: [testbed-manager] 2025-09-13 00:18:02.199519 | orchestrator | 2025-09-13 00:18:02.199531 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-09-13 00:18:02.285222 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-09-13 00:18:02.285254 | orchestrator | 2025-09-13 00:18:02.285265 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-09-13 00:18:03.431826 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:03.431895 | orchestrator | 2025-09-13 00:18:03.431901 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-09-13 00:18:04.341924 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:04.342062 | orchestrator | 2025-09-13 00:18:04.342079 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-09-13 00:18:15.564200 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:15.564310 | orchestrator | 2025-09-13 00:18:15.564326 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-09-13 00:18:15.608815 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:18:15.608874 | orchestrator | 2025-09-13 00:18:15.608896 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-09-13 00:18:15.608916 | orchestrator | 2025-09-13 00:18:15.608944 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-13 00:18:17.468642 | orchestrator | ok: [testbed-manager] 2025-09-13 00:18:17.468738 | orchestrator | 2025-09-13 00:18:17.468792 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-09-13 00:18:17.589077 | orchestrator | included: osism.services.manager for testbed-manager 2025-09-13 00:18:17.589157 | orchestrator | 2025-09-13 00:18:17.589171 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-09-13 00:18:17.661969 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-09-13 00:18:17.662078 | orchestrator | 2025-09-13 00:18:17.662093 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-09-13 00:18:20.105822 | orchestrator | ok: [testbed-manager] 2025-09-13 00:18:20.105927 | orchestrator | 2025-09-13 00:18:20.105943 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-09-13 00:18:20.160223 | orchestrator | ok: [testbed-manager] 2025-09-13 00:18:20.160295 | orchestrator | 2025-09-13 00:18:20.160310 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-09-13 00:18:20.290829 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-09-13 00:18:20.290898 | orchestrator | 2025-09-13 00:18:20.290911 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-09-13 00:18:23.009219 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-09-13 00:18:23.009329 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-09-13 00:18:23.009345 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-09-13 00:18:23.009357 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-09-13 00:18:23.009369 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-09-13 00:18:23.009380 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-09-13 00:18:23.009391 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-09-13 00:18:23.009403 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-09-13 00:18:23.009461 | orchestrator | 2025-09-13 00:18:23.009476 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2025-09-13 00:18:23.610492 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:23.610583 | orchestrator | 2025-09-13 00:18:23.610597 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-09-13 00:18:24.217701 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:24.217799 | orchestrator | 2025-09-13 00:18:24.217814 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-09-13 00:18:24.289917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-09-13 00:18:24.289985 | orchestrator | 2025-09-13 00:18:24.289998 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-09-13 00:18:25.437204 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-09-13 00:18:25.437298 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-09-13 00:18:25.437311 | orchestrator | 2025-09-13 00:18:25.437324 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-09-13 00:18:26.048465 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:26.048547 | orchestrator | 2025-09-13 00:18:26.048560 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-09-13 00:18:26.104595 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:18:26.104637 | orchestrator | 2025-09-13 00:18:26.104649 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-09-13 00:18:26.177674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-09-13 00:18:26.177709 | orchestrator | 2025-09-13 00:18:26.177722 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2025-09-13 00:18:26.750637 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:26.750742 | orchestrator | 2025-09-13 00:18:26.750757 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-09-13 00:18:26.806000 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-09-13 00:18:26.806121 | orchestrator | 2025-09-13 00:18:26.806136 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-09-13 00:18:28.068972 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-13 00:18:28.069089 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-13 00:18:28.069104 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:28.069852 | orchestrator | 2025-09-13 00:18:28.069874 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-09-13 00:18:28.670065 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:28.670153 | orchestrator | 2025-09-13 00:18:28.670166 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-09-13 00:18:28.720271 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:18:28.720328 | orchestrator | 2025-09-13 00:18:28.720339 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-09-13 00:18:28.813274 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-09-13 00:18:28.813331 | orchestrator | 2025-09-13 00:18:28.813343 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-09-13 00:18:29.317968 | orchestrator | changed: [testbed-manager] 2025-09-13 
00:18:29.318092 | orchestrator | 2025-09-13 00:18:29.318106 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-09-13 00:18:29.728197 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:29.728274 | orchestrator | 2025-09-13 00:18:29.728286 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-09-13 00:18:30.884449 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-09-13 00:18:30.884548 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-09-13 00:18:30.884562 | orchestrator | 2025-09-13 00:18:30.884574 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-09-13 00:18:31.481215 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:31.481309 | orchestrator | 2025-09-13 00:18:31.481325 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-09-13 00:18:31.869026 | orchestrator | ok: [testbed-manager] 2025-09-13 00:18:31.869122 | orchestrator | 2025-09-13 00:18:31.869139 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-09-13 00:18:32.181193 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:32.181284 | orchestrator | 2025-09-13 00:18:32.181299 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-09-13 00:18:32.226751 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:18:32.226784 | orchestrator | 2025-09-13 00:18:32.226795 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-09-13 00:18:32.301900 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-09-13 00:18:32.301946 | orchestrator | 2025-09-13 00:18:32.301962 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2025-09-13 00:18:32.342952 | orchestrator | ok: [testbed-manager] 2025-09-13 00:18:32.342982 | orchestrator | 2025-09-13 00:18:32.342994 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-09-13 00:18:34.337751 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-09-13 00:18:34.337850 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-09-13 00:18:34.337864 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-09-13 00:18:34.337874 | orchestrator | 2025-09-13 00:18:34.337885 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-09-13 00:18:35.096729 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:35.096816 | orchestrator | 2025-09-13 00:18:35.096831 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-09-13 00:18:35.833924 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:35.834007 | orchestrator | 2025-09-13 00:18:35.834065 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-09-13 00:18:36.606141 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:36.606261 | orchestrator | 2025-09-13 00:18:36.606280 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-09-13 00:18:36.685665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-09-13 00:18:36.685743 | orchestrator | 2025-09-13 00:18:36.685756 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-09-13 00:18:36.741950 | orchestrator | ok: [testbed-manager] 2025-09-13 00:18:36.741995 | orchestrator | 2025-09-13 00:18:36.742009 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2025-09-13 00:18:37.536051 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-09-13 00:18:37.536141 | orchestrator | 2025-09-13 00:18:37.536155 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-09-13 00:18:37.632335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-09-13 00:18:37.632378 | orchestrator | 2025-09-13 00:18:37.632390 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-09-13 00:18:38.383580 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:38.383671 | orchestrator | 2025-09-13 00:18:38.383686 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-09-13 00:18:38.980916 | orchestrator | ok: [testbed-manager] 2025-09-13 00:18:38.981007 | orchestrator | 2025-09-13 00:18:38.981021 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-09-13 00:18:39.037201 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:18:39.037259 | orchestrator | 2025-09-13 00:18:39.037275 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-09-13 00:18:39.098078 | orchestrator | ok: [testbed-manager] 2025-09-13 00:18:39.098140 | orchestrator | 2025-09-13 00:18:39.098153 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-09-13 00:18:39.973684 | orchestrator | changed: [testbed-manager] 2025-09-13 00:18:39.973781 | orchestrator | 2025-09-13 00:18:39.973796 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-09-13 00:20:13.191060 | orchestrator | changed: [testbed-manager] 2025-09-13 00:20:13.191163 | orchestrator | 2025-09-13 
00:20:13.191180 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-09-13 00:20:14.239980 | orchestrator | ok: [testbed-manager] 2025-09-13 00:20:14.240090 | orchestrator | 2025-09-13 00:20:14.240118 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-09-13 00:20:14.295279 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:20:14.295362 | orchestrator | 2025-09-13 00:20:14.295379 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-09-13 00:20:42.373905 | orchestrator | changed: [testbed-manager] 2025-09-13 00:20:42.374087 | orchestrator | 2025-09-13 00:20:42.374118 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-09-13 00:20:42.420391 | orchestrator | ok: [testbed-manager] 2025-09-13 00:20:42.420506 | orchestrator | 2025-09-13 00:20:42.420520 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-13 00:20:42.420533 | orchestrator | 2025-09-13 00:20:42.420544 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-09-13 00:20:42.485138 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:20:42.485196 | orchestrator | 2025-09-13 00:20:42.485208 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-09-13 00:21:42.534596 | orchestrator | Pausing for 60 seconds 2025-09-13 00:21:42.534687 | orchestrator | changed: [testbed-manager] 2025-09-13 00:21:42.534703 | orchestrator | 2025-09-13 00:21:42.534716 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-09-13 00:21:46.532870 | orchestrator | changed: [testbed-manager] 2025-09-13 00:21:46.532961 | orchestrator | 2025-09-13 00:21:46.532979 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] ***
2025-09-13 00:22:28.042729 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-09-13 00:22:28.042831 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-09-13 00:22:28.042847 | orchestrator | changed: [testbed-manager]
2025-09-13 00:22:28.042886 | orchestrator |
2025-09-13 00:22:28.042899 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-09-13 00:22:37.404579 | orchestrator | changed: [testbed-manager]
2025-09-13 00:22:37.404683 | orchestrator |
2025-09-13 00:22:37.404702 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-09-13 00:22:37.490445 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-09-13 00:22:37.490525 | orchestrator |
2025-09-13 00:22:37.490541 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-13 00:22:37.490553 | orchestrator |
2025-09-13 00:22:37.490565 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-09-13 00:22:37.546163 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:22:37.546244 | orchestrator |
2025-09-13 00:22:37.546258 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:22:37.546271 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-09-13 00:22:37.546282 | orchestrator |
2025-09-13 00:22:37.657501 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-13 00:22:37.657581 | orchestrator | + deactivate
2025-09-13 00:22:37.657598 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-09-13 00:22:37.657612 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-13 00:22:37.657624 | orchestrator | + export PATH
2025-09-13 00:22:37.657636 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-09-13 00:22:37.657648 | orchestrator | + '[' -n '' ']'
2025-09-13 00:22:37.657660 | orchestrator | + hash -r
2025-09-13 00:22:37.657684 | orchestrator | + '[' -n '' ']'
2025-09-13 00:22:37.657697 | orchestrator | + unset VIRTUAL_ENV
2025-09-13 00:22:37.657708 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-09-13 00:22:37.657719 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-09-13 00:22:37.657730 | orchestrator | + unset -f deactivate
2025-09-13 00:22:37.657864 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-09-13 00:22:37.667746 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-13 00:22:37.667788 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-13 00:22:37.667800 | orchestrator | + local max_attempts=60
2025-09-13 00:22:37.667812 | orchestrator | + local name=ceph-ansible
2025-09-13 00:22:37.667824 | orchestrator | + local attempt_num=1
2025-09-13 00:22:37.668911 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-13 00:22:37.709139 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-13 00:22:37.709256 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-13 00:22:37.709425 | orchestrator | + local max_attempts=60
2025-09-13 00:22:37.709446 | orchestrator | + local name=kolla-ansible
2025-09-13 00:22:37.709458 | orchestrator | + local attempt_num=1
2025-09-13 00:22:37.709881 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-13 00:22:37.749316 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-13 00:22:37.749354 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-13 00:22:37.749366 | orchestrator | + local max_attempts=60
2025-09-13 00:22:37.749378 | orchestrator | + local name=osism-ansible
2025-09-13 00:22:37.749389 | orchestrator | + local attempt_num=1
2025-09-13 00:22:37.750126 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-13 00:22:37.791921 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-13 00:22:37.791956 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-13 00:22:37.791969 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-13 00:22:38.517025 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-09-13 00:22:38.745717 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-09-13 00:22:38.745792 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-09-13 00:22:38.745803 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-09-13 00:22:38.745827 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-09-13 00:22:38.745837 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-09-13 00:22:38.745852 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-09-13 00:22:38.745860 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-09-13 00:22:38.745867 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy)
2025-09-13 00:22:38.745874 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-09-13 00:22:38.745881 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-09-13 00:22:38.745888 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-09-13 00:22:38.745895 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-09-13 00:22:38.745903 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-09-13 00:22:38.746011 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2025-09-13 00:22:38.746058 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-09-13 00:22:38.746066 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-09-13 00:22:38.755427 | orchestrator | ++ semver latest 7.0.0
2025-09-13 00:22:38.817167 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-13 00:22:38.817271 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-13 00:22:38.817287 | orchestrator | + sed -i
s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-09-13 00:22:38.822457 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-09-13 00:22:51.099750 | orchestrator | 2025-09-13 00:22:51 | INFO  | Task e98a7535-b00e-451d-aa5f-e3ba64cc9d61 (resolvconf) was prepared for execution.
2025-09-13 00:22:51.099845 | orchestrator | 2025-09-13 00:22:51 | INFO  | It takes a moment until task e98a7535-b00e-451d-aa5f-e3ba64cc9d61 (resolvconf) has been started and output is visible here.
2025-09-13 00:23:05.053128 | orchestrator |
2025-09-13 00:23:05.053296 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-09-13 00:23:05.053316 | orchestrator |
2025-09-13 00:23:05.053328 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-13 00:23:05.053364 | orchestrator | Saturday 13 September 2025 00:22:55 +0000 (0:00:00.162) 0:00:00.162 ****
2025-09-13 00:23:05.053377 | orchestrator | ok: [testbed-manager]
2025-09-13 00:23:05.053389 | orchestrator |
2025-09-13 00:23:05.053401 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-09-13 00:23:05.053412 | orchestrator | Saturday 13 September 2025 00:22:58 +0000 (0:00:03.813) 0:00:03.975 ****
2025-09-13 00:23:05.053423 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:23:05.053435 | orchestrator |
2025-09-13 00:23:05.053446 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-09-13 00:23:05.053457 | orchestrator | Saturday 13 September 2025 00:22:58 +0000 (0:00:00.068) 0:00:04.044 ****
2025-09-13 00:23:05.053468 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-09-13 00:23:05.053479 | orchestrator |
2025-09-13 00:23:05.053490 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-09-13 00:23:05.053501 | orchestrator | Saturday 13 September 2025 00:22:59 +0000 (0:00:00.094) 0:00:04.138 ****
2025-09-13 00:23:05.053512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-09-13 00:23:05.053523 | orchestrator |
2025-09-13 00:23:05.053534 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-09-13 00:23:05.053545 | orchestrator | Saturday 13 September 2025 00:22:59 +0000 (0:00:01.138) 0:00:04.222 ****
2025-09-13 00:23:05.053556 | orchestrator | ok: [testbed-manager]
2025-09-13 00:23:05.053566 | orchestrator |
2025-09-13 00:23:05.053577 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-09-13 00:23:05.053588 | orchestrator | Saturday 13 September 2025 00:23:00 +0000 (0:00:01.138) 0:00:05.360 ****
2025-09-13 00:23:05.053599 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:23:05.053610 | orchestrator |
2025-09-13 00:23:05.053621 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-09-13 00:23:05.053631 | orchestrator | Saturday 13 September 2025 00:23:00 +0000 (0:00:00.072) 0:00:05.433 ****
2025-09-13 00:23:05.053642 | orchestrator | ok: [testbed-manager]
2025-09-13 00:23:05.053653 | orchestrator |
2025-09-13 00:23:05.053664 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-09-13 00:23:05.053674 | orchestrator | Saturday 13 September 2025 00:23:00 +0000 (0:00:00.464) 0:00:05.898 ****
2025-09-13 00:23:05.053685 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:23:05.053696 | orchestrator |
2025-09-13 00:23:05.053707 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
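The resolvconf role applied here ends up pointing /etc/resolv.conf at systemd-resolved's stub file. A minimal, self-contained sketch of that link step (it runs against a scratch directory rather than the real /etc, and the file contents are stand-ins; the actual logic lives in the osism.commons.resolvconf role):

```shell
#!/bin/sh
# Hypothetical sketch of the resolvconf link step from the play above.
# Operates on a temp directory so it never touches the host's /etc.
set -eu
etc=$(mktemp -d)
stub="$etc/stub-resolv.conf"

# Stand-in for /run/systemd/resolve/stub-resolv.conf, which normally
# points clients at systemd-resolved's local stub listener.
printf 'nameserver 127.0.0.53\n' > "$stub"

# Replace any existing resolv.conf with a symlink to the stub.
# ln -sfn is idempotent, matching the task reporting "changed" only once.
ln -sfn "$stub" "$etc/resolv.conf"

readlink "$etc/resolv.conf"
```

After the link step the role restarts systemd-resolved, which is why the play ends with a "Restart systemd-resolved service" handler-style task.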
2025-09-13 00:23:05.053718 | orchestrator | Saturday 13 September 2025 00:23:00 +0000 (0:00:00.094) 0:00:05.992 ****
2025-09-13 00:23:05.053729 | orchestrator | changed: [testbed-manager]
2025-09-13 00:23:05.053740 | orchestrator |
2025-09-13 00:23:05.053751 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-09-13 00:23:05.053761 | orchestrator | Saturday 13 September 2025 00:23:01 +0000 (0:00:00.543) 0:00:06.536 ****
2025-09-13 00:23:05.053772 | orchestrator | changed: [testbed-manager]
2025-09-13 00:23:05.053783 | orchestrator |
2025-09-13 00:23:05.053794 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-09-13 00:23:05.053805 | orchestrator | Saturday 13 September 2025 00:23:02 +0000 (0:00:01.081) 0:00:07.617 ****
2025-09-13 00:23:05.053815 | orchestrator | ok: [testbed-manager]
2025-09-13 00:23:05.053826 | orchestrator |
2025-09-13 00:23:05.053837 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-09-13 00:23:05.053848 | orchestrator | Saturday 13 September 2025 00:23:03 +0000 (0:00:00.967) 0:00:08.585 ****
2025-09-13 00:23:05.053870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-09-13 00:23:05.053890 | orchestrator |
2025-09-13 00:23:05.053901 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-09-13 00:23:05.053912 | orchestrator | Saturday 13 September 2025 00:23:03 +0000 (0:00:00.091) 0:00:08.677 ****
2025-09-13 00:23:05.053922 | orchestrator | changed: [testbed-manager]
2025-09-13 00:23:05.053933 | orchestrator |
2025-09-13 00:23:05.053944 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:23:05.053955 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-13 00:23:05.053967 | orchestrator |
2025-09-13 00:23:05.053978 | orchestrator |
2025-09-13 00:23:05.053988 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:23:05.053999 | orchestrator | Saturday 13 September 2025 00:23:04 +0000 (0:00:01.190) 0:00:09.867 ****
2025-09-13 00:23:05.054010 | orchestrator | ===============================================================================
2025-09-13 00:23:05.054072 | orchestrator | Gathering Facts --------------------------------------------------------- 3.81s
2025-09-13 00:23:05.054084 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.19s
2025-09-13 00:23:05.054095 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.14s
2025-09-13 00:23:05.054106 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.08s
2025-09-13 00:23:05.054116 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.97s
2025-09-13 00:23:05.054127 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s
2025-09-13 00:23:05.054157 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.46s
2025-09-13 00:23:05.054169 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2025-09-13 00:23:05.054205 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-09-13 00:23:05.054217 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-09-13 00:23:05.054228 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2025-09-13 00:23:05.054239 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2025-09-13 00:23:05.054250 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-09-13 00:23:05.352726 | orchestrator | + osism apply sshconfig
2025-09-13 00:23:17.501737 | orchestrator | 2025-09-13 00:23:17 | INFO  | Task d51cc13e-bd1b-493f-8616-f2809407460e (sshconfig) was prepared for execution.
2025-09-13 00:23:17.501850 | orchestrator | 2025-09-13 00:23:17 | INFO  | It takes a moment until task d51cc13e-bd1b-493f-8616-f2809407460e (sshconfig) has been started and output is visible here.
2025-09-13 00:23:28.529764 | orchestrator |
2025-09-13 00:23:28.529898 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-09-13 00:23:28.529916 | orchestrator |
2025-09-13 00:23:28.529929 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-09-13 00:23:28.529941 | orchestrator | Saturday 13 September 2025 00:23:21 +0000 (0:00:00.147) 0:00:00.147 ****
2025-09-13 00:23:28.529952 | orchestrator | ok: [testbed-manager]
2025-09-13 00:23:28.529964 | orchestrator |
2025-09-13 00:23:28.529976 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-09-13 00:23:28.529987 | orchestrator | Saturday 13 September 2025 00:23:21 +0000 (0:00:00.570) 0:00:00.718 ****
2025-09-13 00:23:28.529997 | orchestrator | changed: [testbed-manager]
2025-09-13 00:23:28.530009 | orchestrator |
2025-09-13 00:23:28.530072 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-09-13 00:23:28.530085 | orchestrator | Saturday 13 September 2025 00:23:22 +0000 (0:00:00.512) 0:00:01.231 ****
2025-09-13 00:23:28.530096 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-09-13 00:23:28.530107 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-09-13 00:23:28.530146 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-09-13 00:23:28.530157 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-09-13 00:23:28.530168 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-09-13 00:23:28.530226 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-09-13 00:23:28.530238 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-09-13 00:23:28.530249 | orchestrator |
2025-09-13 00:23:28.530260 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-09-13 00:23:28.530271 | orchestrator | Saturday 13 September 2025 00:23:27 +0000 (0:00:05.592) 0:00:06.824 ****
2025-09-13 00:23:28.530282 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:23:28.530292 | orchestrator |
2025-09-13 00:23:28.530303 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-09-13 00:23:28.530316 | orchestrator | Saturday 13 September 2025 00:23:27 +0000 (0:00:00.058) 0:00:06.883 ****
2025-09-13 00:23:28.530328 | orchestrator | changed: [testbed-manager]
2025-09-13 00:23:28.530341 | orchestrator |
2025-09-13 00:23:28.530353 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:23:28.530368 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-13 00:23:28.530382 | orchestrator |
2025-09-13 00:23:28.530394 | orchestrator |
2025-09-13 00:23:28.530407 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:23:28.530420 | orchestrator | Saturday 13 September 2025 00:23:28 +0000 (0:00:00.524) 0:00:07.407 ****
2025-09-13 00:23:28.530432 | orchestrator | ===============================================================================
2025-09-13 00:23:28.530445 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.59s
2025-09-13 00:23:28.530458 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.57s
2025-09-13 00:23:28.530470 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.52s
2025-09-13 00:23:28.530482 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.51s
2025-09-13 00:23:28.530494 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s
2025-09-13 00:23:28.711903 | orchestrator | + osism apply known-hosts
2025-09-13 00:23:40.454437 | orchestrator | 2025-09-13 00:23:40 | INFO  | Task 95e64df5-471a-45ed-b3b9-c7be49904bc9 (known-hosts) was prepared for execution.
2025-09-13 00:23:40.454554 | orchestrator | 2025-09-13 00:23:40 | INFO  | It takes a moment until task 95e64df5-471a-45ed-b3b9-c7be49904bc9 (known-hosts) has been started and output is visible here.
2025-09-13 00:23:56.757651 | orchestrator |
2025-09-13 00:23:56.757766 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-09-13 00:23:56.757783 | orchestrator |
2025-09-13 00:23:56.757795 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-09-13 00:23:56.757807 | orchestrator | Saturday 13 September 2025 00:23:44 +0000 (0:00:00.167) 0:00:00.167 ****
2025-09-13 00:23:56.757819 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-09-13 00:23:56.757830 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-09-13 00:23:56.757841 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-09-13 00:23:56.757852 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-09-13 00:23:56.757863 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-09-13 00:23:56.757874 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-09-13 00:23:56.757884 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-09-13 00:23:56.757895 | orchestrator |
2025-09-13 00:23:56.757906 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-09-13 00:23:56.757918 | orchestrator | Saturday 13 September 2025 00:23:50 +0000 (0:00:05.971) 0:00:06.138 ****
2025-09-13 00:23:56.757953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-09-13 00:23:56.757967 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-09-13 00:23:56.757978 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-09-13 00:23:56.757989 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-09-13 00:23:56.758000 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-09-13 00:23:56.758068 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-09-13 00:23:56.758094 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-09-13 00:23:56.758104 | orchestrator |
2025-09-13 00:23:56.758116 | orchestrator | TASK
[osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-13 00:23:56.758127 | orchestrator | Saturday 13 September 2025 00:23:50 +0000 (0:00:00.171) 0:00:06.310 **** 2025-09-13 00:23:56.758139 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGTiBAQWC8sDXij/CvTESx3NDZj9dMNLlsvxv+ToJ6hoYVytRzbLJ/tiC7yDF+/wyHjISvhC3V5Yn/movnpU/LE=) 2025-09-13 00:23:56.758155 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDYNVS7UGi5yx2q+8LDwzScqiJA/OXOnbXSZpTMQFRlXr5Lq2K2DnhHWLLJfF+YMKFUZvYgxUiQxAX2FRPBa2eUEIOawxLQpXNDBkrB6xZV0a52+i/I2Gx9tVxMQVQ77A/dMzXXKzVgnEN+w3KtqR6zAAlCzYZniPm5zNvwSbeRHXCb2t8c5vzbSGQQNzyrEWNVbt+hcXE4QscgjBcxvRlpni5wOvBKqBixkbTKsqFu8poLxaoQ/Roatje5dGQEfM12Sbwem27LeS+WvHTHa6VrXJ+al6KiJ4ei/HsZHoRRzbA48SOnhw142Umr3bGltIEia39GHhyPWaLW3kHsFPERlFRhJ6vykT6kxrV3jA1v7KN9NsOciCgotfQ68F3zGa8kx42+3TiciqJ6doEWbZhZEvBUquvbnmr25skhE44YuVsTWoPXZCFiPvWN8beGomj3FeRmwIHfws7N4AyqjIbGMnv0lYmRpsBuKQS+TzN1IHIjyaYKyWFXJJyX8kKXLlE=) 2025-09-13 00:23:56.758196 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGAMXAQYMqsuNP1pGVUALQgJ199GM8jX+kFEAlmS7BJk) 2025-09-13 00:23:56.758212 | orchestrator | 2025-09-13 00:23:56.758225 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-13 00:23:56.758238 | orchestrator | Saturday 13 September 2025 00:23:51 +0000 (0:00:01.093) 0:00:07.404 **** 2025-09-13 00:23:56.758270 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC6nhiLhCChpoA1Y2f6XRMajOvUrqxVoQ9jMPCvTGptBpT1ilCbQg0dBZaLHv+dbzNCX9FKRQsa/UWJwWhr1GfHU7mHX1IAknCgma819ksTkM7o+MYsm6wrJx7X8dAwdVM7+4UjXJ/SqU/hKcIh31G1IKwwvRaDgcUfJuEMC3oCihjKAbSVSEYOMrVwKN1oF5Agz0L71uCEYE2RSPyw0XwDOSeLSX8Wf44D8931f3K6YGrngONBIyXAVWifH3dA+2dSdGj1nUCoPQgTRQT39C+ArXjkNqppv4Fjm0NzyIHDl6phNHAky/0krkRrqsNyLN7Na/VksIycOT1stAemG2FloEQR5aH6/RDg9US5sWe12MHo4OL91lg4bVtUtX3vXwS9AxkZ48Oga2lRUDYv/lrdInKy4zne4T67hY35kDEsQDX2oHhHcq6RfGIcE6N6wSJG8AAxR0bP7vGm5ummAj4z+XPRxO5mGa7qNJWOMytGsEiEyZ4ythiceblCjpbMTTc=) 2025-09-13 00:23:56.758284 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPjjvbRHQ8Miy3HTvIqdjbt3hxsOzmwP/E3dhHkR42VUjmdQebVa9u/VgdAc+hQhiLnsGHCbCz5lm2YUOAg2JIo=) 2025-09-13 00:23:56.758298 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOaMrUlGQw/Jfkg+84tbtq6W8YPzn1D69BXnuOTr1i3h) 2025-09-13 00:23:56.758321 | orchestrator | 2025-09-13 00:23:56.758333 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-13 00:23:56.758346 | orchestrator | Saturday 13 September 2025 00:23:52 +0000 (0:00:00.938) 0:00:08.343 **** 2025-09-13 00:23:56.758358 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHJCl/kjVg+/d+TKZDygDCyn5+ge0TykNUiyWz2+A0w58eGnSeiDpVW+zcwoEQk6/4+IxzckHeGxFYKS8IvGCIE=) 2025-09-13 00:23:56.758370 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILH1yzxJtkNZjpxkgPJXPg6sKTFndAt7CFade/DADw+H) 2025-09-13 00:23:56.758383 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCw4Tltx/EtygrP6KYi4y1a2GNbma2Z0FCFqm9UibceB6N8DZ0k53N4Ps93s94I2hoSKImZdzAdl3xWFFo+qWrOwDmH9NB/IGkqdj2gR85aSJcSa0z9bhT1mFMn15J6epkavO0WmFLvhM0N6BbPFmh/7FWVGJ9NmwSpkrRaOrRSRQEKZa7thNLzJhri2O9ZlGPZmAEKDmWQ5pmZslKGwO2+QS1ZlRO2zLp1coRk2Ea+H+Oj2uyU6AS/aKDdjZlqWqMShEkOnayu0rjSmUjmT4CqchnidJirqDGtepvVKEGqXUxLaS5JgzLP7JmPhLJ5/fT6ka0gmwsZnFj/jUSxU/mNQlISj5Ix5YwC7JLu4Jlm5VMr9zXvNQg4RniyCoH6xwN0VP92hWoIv7ajxA/ex1JKS8RkonZACDf/YgRP+vqQZKShKdsGy6iPx9NNuiMQwl1FnL+40YU0zjA8/TMQ42nJBc33o4JDMUVRqRlSthm8I0OOJk/LB4HJ9xgd7oRX05s=) 2025-09-13 00:23:56.758396 | orchestrator | 2025-09-13 00:23:56.758409 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-13 00:23:56.758421 | orchestrator | Saturday 13 September 2025 00:23:53 +0000 (0:00:01.026) 0:00:09.370 **** 2025-09-13 00:23:56.758503 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC65gmBOiN+RyxI3ap351oTOur/zq1ed7/pdJ5rLmECohs4txoYnLA8N221CL3QFoFzS+CVj+On8OaoenCr/KtcOsDrJibfXHjHbi2rOQNTwZ6Aobm9UcPoQoBA0yRivrG5EfG+3NqU2CjSkDVSz8BrrGxhW3sQtcls0JlabQhCGc+/88zLnEMSIG4z6sgkjHJaKBLy5PKkhVJlny04XUgiEFZq+Vsx6oKnj2JbRe5+EZIptOTRN3mzG5N3U8bhDn4qlGObC6sFrL6XOccoAlj3FcA2YvByPW06lnORm2K5a3+kVA5cf30GcacJFvl3ipVwG3hTx3qr68hQ6mZwADboXQC2XNf6I3d/frkmvvL4nVyaGxEFaTx+Nb/ejSEcqluH2IMBysPcNN4jqZH/BLSYGZoGesplvaD8+5eAtBPdci+zhJNshYjyuX89miFosG65X+oXTvhmgZKbwvEo3C+04unLmAjDC2BN3Z0Glb2HcR0Gi0vhoggp6m/1emV+/wM=) 2025-09-13 00:23:56.758518 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHpVVc4y6YDc7Emkc6EjpKpvqmJ0x2BJBxMOVcJnZ6BF2+F9BIb0qtt0ocr4Y75XVX2MXRnbuxHcWD9rPt7XRmU=) 2025-09-13 00:23:56.758532 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJs+J2L2IESNobypCjNU9f1BYeAKeE1HWm16BMYIZySr) 2025-09-13 00:23:56.758543 | orchestrator | 2025-09-13 00:23:56.758554 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-13 00:23:56.758565 | orchestrator | Saturday 13 September 2025 00:23:54 +0000 (0:00:01.053) 0:00:10.423 **** 2025-09-13 00:23:56.758576 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZCToWNCQ53OBXXsW8FYbsTyQDrkmrLIZge9SXAf5wQXT6TUGTHP+PsgDDFl+crKkhqajBfYTvg4o9aAMFWWmQE3z4SkcAP5Qh1mmmygY8yLG6peM+TIqiLA3yiHcA2PIcV7iU2PNe7PndAirJaC6Z+R38CqVKY+PGvn5FbBB/kU33vlj3TeDdSPo7weRdnMtSrbN/FcaaGiphxozDVbQFpNrfmj4XK+d3KR2eeaNXp81J5AAapJOk2v8x/lznFpSS1Mf0rxHWhW5H/HYOO/ZvgW7QnYZKYy6qOlUcsOKGgVplO+8zRhiDUg5IQewKmqu29Nss2r5wn4rNI6WND3Srr3D59oqzfr4j66JqViJ4UgpHHvLnW17KHhbaLfTyxHGJkAWCQA8tyd5b95v7M3kTcQA66jNwt1Dj+rSZwP0zGLKExNk61eGcGYT6nYmqDO2nSpY2oExJ8SM2X23+3Mvo+hBsqZnrrKrAggXLqmueI67Ml95D7angW42RixxdnvM=) 2025-09-13 00:23:56.758587 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPYjb3uzBS3lduJUAMWfJOsiJ36wcnBhxi02CFn5xw2xXUIOuPp86LSHCBMyZW4hQPxiEtqhGTwWJL1h/nQAHHg=) 2025-09-13 00:23:56.758598 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAOmnlfEP4Tfzi0jGpcSy2ljrYq43Sx7SijVkqWfMSJi) 2025-09-13 00:23:56.758617 | orchestrator | 2025-09-13 00:23:56.758628 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-13 00:23:56.758639 | orchestrator | Saturday 13 September 2025 00:23:55 +0000 (0:00:01.095) 0:00:11.519 **** 2025-09-13 00:23:56.758660 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCn2RCQju5eTyEsL997PF9Vg1sWTvTdai4eMUnLM4lhtABHgDFwcBGFt5cobSFt/R/KvaoKQPQvtkwtRXAd32n986SnMHIXdOZNcCk8eLz8Bx3imA/k/CggeTefHzXnwx1ry0Hw9qJgSAy5Ht8sZrqCQcssEUbYpy4uKCVCgB3yVPGEQRNboaNKzId+B+6GyYn4dJhPff1+y/sf6Hao7FS0NJa1QxEQ2+unQTH4cB/vGg/woNXOImx7cxO1CO83kBaPsgVhSFK00stcQR20j1UqWNTO3M11LQr32kWu8jxQs12vLJ2LKs5AyUwz6TLX8XH3JeKaC/ZYc3vr2L40g++A2BWdXA8cDK5cdO2zrfVcahHRHvxmSZDay1xDetg4FI4a3ntszU1MBSJf1qwlemzfUc8CmsM1MpVGm08IYFxIMDInCh1gzMNjmWgnjeA7BwQwuyHIgtx9mCwef779xRBlrVGZ+CmCJEjIMOZMHZmEAfceYXVPOoww692Zwmsd0KE=) 2025-09-13 00:24:08.076027 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAH1JHmJXfgK6h1AfB3b0K/fA++/xfcQCQKTSM/rePPn) 2025-09-13 00:24:08.076146 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNokz/w+oLs8xsLGyXRxOwoAxbI2byJktTrvXp9SAmsj6AFB7hNVv0mAhaN1BkSz8CIYDDMhuxNkNfUB35SPtZM=) 2025-09-13 00:24:08.076164 | orchestrator | 2025-09-13 00:24:08.076220 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-13 00:24:08.076234 | orchestrator | Saturday 13 September 2025 00:23:56 +0000 (0:00:01.056) 0:00:12.575 **** 2025-09-13 00:24:08.076249 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAsxgESf5pePrRkAUO81n9lrKwPj39TYegPvDHXaUuhs2q7bVVjE8fgeLGfHm+iCK794jaZYQK2HLwclS5lw/dwZ0lycDo24hvTeLNTUlFAKvJixbsSdAgqe7I0YKabgwwMSIO0X1nZ3lOW1kn68wx1y9FStZGVTZ0owbeJlJX4Qu7PLtxDUcbm1bKxK0KhoeZY92RWzqvcttPyfQ8GfLPbEg5zNq06wt7+Wntr5/S8ojf7BGNcNKLwGDNVZbHB/xhFZSvncvW45SDDj7VKpBK5Lt6552sBfUJUz9nl6DUI/d07FWFCraeF9fSwKYA+eUDyEZ/fPtCksexnqZfpWD0MHNcm1IXbT5QGL4yPG3oDFZsTjjIgnFQV2Wc7CH2lXAEolblKDzCovgvKk/b406n2U3MYSBnYt3ZTIfgiC1bZ5OsWqUOBqBmi4gAABoCNuFuOsbEbsTt3sVAS6Uw9eQDtEBAt2s7Ef1zYgslIE2HdzI2YiORRSQnFCPC1rbSUqE=) 2025-09-13 00:24:08.076264 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKRtqlrfJTWkBqPYYRs0sYbaD+S2TD+Frcn4kjcX+i0veYaOQthxDrrVkXfvx5n84wUDI+G1eMBzt5IYc84AGiM=) 2025-09-13 00:24:08.076276 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE33J6Gz/Rnv/F9vR8TBAaxtyRkGS5q9WsNkod8vXKjM) 2025-09-13 00:24:08.076287 | orchestrator | 2025-09-13 00:24:08.076299 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-13 00:24:08.076311 | orchestrator | Saturday 13 September 2025 00:23:57 +0000 (0:00:01.101) 0:00:13.677 **** 2025-09-13 00:24:08.076323 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-13 00:24:08.076334 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-13 00:24:08.076345 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-13 00:24:08.076356 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-13 00:24:08.076367 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-13 00:24:08.076378 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-13 00:24:08.076389 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-13 00:24:08.076399 | orchestrator | 2025-09-13 00:24:08.076411 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-13 00:24:08.076443 | orchestrator | Saturday 13 September 2025 00:24:02 +0000 (0:00:05.115) 0:00:18.792 **** 2025-09-13 00:24:08.076456 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-13 00:24:08.076470 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for 
testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-13 00:24:08.076506 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-13 00:24:08.076518 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-13 00:24:08.076529 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-13 00:24:08.076540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-13 00:24:08.076551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-13 00:24:08.076564 | orchestrator | 2025-09-13 00:24:08.076577 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-13 00:24:08.076590 | orchestrator | Saturday 13 September 2025 00:24:03 +0000 (0:00:00.154) 0:00:18.947 **** 2025-09-13 00:24:08.076603 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGAMXAQYMqsuNP1pGVUALQgJ199GM8jX+kFEAlmS7BJk) 2025-09-13 00:24:08.076641 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDYNVS7UGi5yx2q+8LDwzScqiJA/OXOnbXSZpTMQFRlXr5Lq2K2DnhHWLLJfF+YMKFUZvYgxUiQxAX2FRPBa2eUEIOawxLQpXNDBkrB6xZV0a52+i/I2Gx9tVxMQVQ77A/dMzXXKzVgnEN+w3KtqR6zAAlCzYZniPm5zNvwSbeRHXCb2t8c5vzbSGQQNzyrEWNVbt+hcXE4QscgjBcxvRlpni5wOvBKqBixkbTKsqFu8poLxaoQ/Roatje5dGQEfM12Sbwem27LeS+WvHTHa6VrXJ+al6KiJ4ei/HsZHoRRzbA48SOnhw142Umr3bGltIEia39GHhyPWaLW3kHsFPERlFRhJ6vykT6kxrV3jA1v7KN9NsOciCgotfQ68F3zGa8kx42+3TiciqJ6doEWbZhZEvBUquvbnmr25skhE44YuVsTWoPXZCFiPvWN8beGomj3FeRmwIHfws7N4AyqjIbGMnv0lYmRpsBuKQS+TzN1IHIjyaYKyWFXJJyX8kKXLlE=) 2025-09-13 00:24:08.076655 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGTiBAQWC8sDXij/CvTESx3NDZj9dMNLlsvxv+ToJ6hoYVytRzbLJ/tiC7yDF+/wyHjISvhC3V5Yn/movnpU/LE=) 2025-09-13 00:24:08.076669 | orchestrator | 2025-09-13 00:24:08.076687 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-13 00:24:08.076707 | orchestrator | Saturday 13 September 2025 00:24:04 +0000 (0:00:00.972) 0:00:19.919 **** 2025-09-13 00:24:08.076726 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPjjvbRHQ8Miy3HTvIqdjbt3hxsOzmwP/E3dhHkR42VUjmdQebVa9u/VgdAc+hQhiLnsGHCbCz5lm2YUOAg2JIo=) 2025-09-13 00:24:08.076758 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6nhiLhCChpoA1Y2f6XRMajOvUrqxVoQ9jMPCvTGptBpT1ilCbQg0dBZaLHv+dbzNCX9FKRQsa/UWJwWhr1GfHU7mHX1IAknCgma819ksTkM7o+MYsm6wrJx7X8dAwdVM7+4UjXJ/SqU/hKcIh31G1IKwwvRaDgcUfJuEMC3oCihjKAbSVSEYOMrVwKN1oF5Agz0L71uCEYE2RSPyw0XwDOSeLSX8Wf44D8931f3K6YGrngONBIyXAVWifH3dA+2dSdGj1nUCoPQgTRQT39C+ArXjkNqppv4Fjm0NzyIHDl6phNHAky/0krkRrqsNyLN7Na/VksIycOT1stAemG2FloEQR5aH6/RDg9US5sWe12MHo4OL91lg4bVtUtX3vXwS9AxkZ48Oga2lRUDYv/lrdInKy4zne4T67hY35kDEsQDX2oHhHcq6RfGIcE6N6wSJG8AAxR0bP7vGm5ummAj4z+XPRxO5mGa7qNJWOMytGsEiEyZ4ythiceblCjpbMTTc=) 
2025-09-13 00:24:08.076780 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOaMrUlGQw/Jfkg+84tbtq6W8YPzn1D69BXnuOTr1i3h) 2025-09-13 00:24:08.076797 | orchestrator | 2025-09-13 00:24:08.076816 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-13 00:24:08.076834 | orchestrator | Saturday 13 September 2025 00:24:06 +0000 (0:00:01.997) 0:00:21.917 **** 2025-09-13 00:24:08.076867 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCw4Tltx/EtygrP6KYi4y1a2GNbma2Z0FCFqm9UibceB6N8DZ0k53N4Ps93s94I2hoSKImZdzAdl3xWFFo+qWrOwDmH9NB/IGkqdj2gR85aSJcSa0z9bhT1mFMn15J6epkavO0WmFLvhM0N6BbPFmh/7FWVGJ9NmwSpkrRaOrRSRQEKZa7thNLzJhri2O9ZlGPZmAEKDmWQ5pmZslKGwO2+QS1ZlRO2zLp1coRk2Ea+H+Oj2uyU6AS/aKDdjZlqWqMShEkOnayu0rjSmUjmT4CqchnidJirqDGtepvVKEGqXUxLaS5JgzLP7JmPhLJ5/fT6ka0gmwsZnFj/jUSxU/mNQlISj5Ix5YwC7JLu4Jlm5VMr9zXvNQg4RniyCoH6xwN0VP92hWoIv7ajxA/ex1JKS8RkonZACDf/YgRP+vqQZKShKdsGy6iPx9NNuiMQwl1FnL+40YU0zjA8/TMQ42nJBc33o4JDMUVRqRlSthm8I0OOJk/LB4HJ9xgd7oRX05s=) 2025-09-13 00:24:08.076887 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHJCl/kjVg+/d+TKZDygDCyn5+ge0TykNUiyWz2+A0w58eGnSeiDpVW+zcwoEQk6/4+IxzckHeGxFYKS8IvGCIE=) 2025-09-13 00:24:08.076905 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILH1yzxJtkNZjpxkgPJXPg6sKTFndAt7CFade/DADw+H) 2025-09-13 00:24:08.076922 | orchestrator | 2025-09-13 00:24:08.076939 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-13 00:24:08.076957 | orchestrator | Saturday 13 September 2025 00:24:07 +0000 (0:00:00.978) 0:00:22.896 **** 2025-09-13 00:24:08.076985 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC65gmBOiN+RyxI3ap351oTOur/zq1ed7/pdJ5rLmECohs4txoYnLA8N221CL3QFoFzS+CVj+On8OaoenCr/KtcOsDrJibfXHjHbi2rOQNTwZ6Aobm9UcPoQoBA0yRivrG5EfG+3NqU2CjSkDVSz8BrrGxhW3sQtcls0JlabQhCGc+/88zLnEMSIG4z6sgkjHJaKBLy5PKkhVJlny04XUgiEFZq+Vsx6oKnj2JbRe5+EZIptOTRN3mzG5N3U8bhDn4qlGObC6sFrL6XOccoAlj3FcA2YvByPW06lnORm2K5a3+kVA5cf30GcacJFvl3ipVwG3hTx3qr68hQ6mZwADboXQC2XNf6I3d/frkmvvL4nVyaGxEFaTx+Nb/ejSEcqluH2IMBysPcNN4jqZH/BLSYGZoGesplvaD8+5eAtBPdci+zhJNshYjyuX89miFosG65X+oXTvhmgZKbwvEo3C+04unLmAjDC2BN3Z0Glb2HcR0Gi0vhoggp6m/1emV+/wM=) 2025-09-13 00:24:08.077005 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHpVVc4y6YDc7Emkc6EjpKpvqmJ0x2BJBxMOVcJnZ6BF2+F9BIb0qtt0ocr4Y75XVX2MXRnbuxHcWD9rPt7XRmU=) 2025-09-13 00:24:08.077043 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJs+J2L2IESNobypCjNU9f1BYeAKeE1HWm16BMYIZySr) 2025-09-13 00:24:11.829970 | orchestrator | 2025-09-13 00:24:11.830131 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-13 00:24:11.830151 | orchestrator | Saturday 13 September 2025 00:24:08 +0000 (0:00:00.996) 0:00:23.892 **** 2025-09-13 00:24:11.830165 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPYjb3uzBS3lduJUAMWfJOsiJ36wcnBhxi02CFn5xw2xXUIOuPp86LSHCBMyZW4hQPxiEtqhGTwWJL1h/nQAHHg=) 2025-09-13 00:24:11.830229 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCZCToWNCQ53OBXXsW8FYbsTyQDrkmrLIZge9SXAf5wQXT6TUGTHP+PsgDDFl+crKkhqajBfYTvg4o9aAMFWWmQE3z4SkcAP5Qh1mmmygY8yLG6peM+TIqiLA3yiHcA2PIcV7iU2PNe7PndAirJaC6Z+R38CqVKY+PGvn5FbBB/kU33vlj3TeDdSPo7weRdnMtSrbN/FcaaGiphxozDVbQFpNrfmj4XK+d3KR2eeaNXp81J5AAapJOk2v8x/lznFpSS1Mf0rxHWhW5H/HYOO/ZvgW7QnYZKYy6qOlUcsOKGgVplO+8zRhiDUg5IQewKmqu29Nss2r5wn4rNI6WND3Srr3D59oqzfr4j66JqViJ4UgpHHvLnW17KHhbaLfTyxHGJkAWCQA8tyd5b95v7M3kTcQA66jNwt1Dj+rSZwP0zGLKExNk61eGcGYT6nYmqDO2nSpY2oExJ8SM2X23+3Mvo+hBsqZnrrKrAggXLqmueI67Ml95D7angW42RixxdnvM=) 2025-09-13 00:24:11.830245 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAOmnlfEP4Tfzi0jGpcSy2ljrYq43Sx7SijVkqWfMSJi) 2025-09-13 00:24:11.830258 | orchestrator | 2025-09-13 00:24:11.830269 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-13 00:24:11.830280 | orchestrator | Saturday 13 September 2025 00:24:09 +0000 (0:00:00.966) 0:00:24.859 **** 2025-09-13 00:24:11.830292 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNokz/w+oLs8xsLGyXRxOwoAxbI2byJktTrvXp9SAmsj6AFB7hNVv0mAhaN1BkSz8CIYDDMhuxNkNfUB35SPtZM=) 2025-09-13 00:24:11.830329 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCn2RCQju5eTyEsL997PF9Vg1sWTvTdai4eMUnLM4lhtABHgDFwcBGFt5cobSFt/R/KvaoKQPQvtkwtRXAd32n986SnMHIXdOZNcCk8eLz8Bx3imA/k/CggeTefHzXnwx1ry0Hw9qJgSAy5Ht8sZrqCQcssEUbYpy4uKCVCgB3yVPGEQRNboaNKzId+B+6GyYn4dJhPff1+y/sf6Hao7FS0NJa1QxEQ2+unQTH4cB/vGg/woNXOImx7cxO1CO83kBaPsgVhSFK00stcQR20j1UqWNTO3M11LQr32kWu8jxQs12vLJ2LKs5AyUwz6TLX8XH3JeKaC/ZYc3vr2L40g++A2BWdXA8cDK5cdO2zrfVcahHRHvxmSZDay1xDetg4FI4a3ntszU1MBSJf1qwlemzfUc8CmsM1MpVGm08IYFxIMDInCh1gzMNjmWgnjeA7BwQwuyHIgtx9mCwef779xRBlrVGZ+CmCJEjIMOZMHZmEAfceYXVPOoww692Zwmsd0KE=) 2025-09-13 00:24:11.830341 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAH1JHmJXfgK6h1AfB3b0K/fA++/xfcQCQKTSM/rePPn) 2025-09-13 00:24:11.830352 | orchestrator | 2025-09-13 00:24:11.830363 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-13 00:24:11.830374 | orchestrator | Saturday 13 September 2025 00:24:09 +0000 (0:00:00.921) 0:00:25.781 **** 2025-09-13 00:24:11.830385 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE33J6Gz/Rnv/F9vR8TBAaxtyRkGS5q9WsNkod8vXKjM) 2025-09-13 00:24:11.830397 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAsxgESf5pePrRkAUO81n9lrKwPj39TYegPvDHXaUuhs2q7bVVjE8fgeLGfHm+iCK794jaZYQK2HLwclS5lw/dwZ0lycDo24hvTeLNTUlFAKvJixbsSdAgqe7I0YKabgwwMSIO0X1nZ3lOW1kn68wx1y9FStZGVTZ0owbeJlJX4Qu7PLtxDUcbm1bKxK0KhoeZY92RWzqvcttPyfQ8GfLPbEg5zNq06wt7+Wntr5/S8ojf7BGNcNKLwGDNVZbHB/xhFZSvncvW45SDDj7VKpBK5Lt6552sBfUJUz9nl6DUI/d07FWFCraeF9fSwKYA+eUDyEZ/fPtCksexnqZfpWD0MHNcm1IXbT5QGL4yPG3oDFZsTjjIgnFQV2Wc7CH2lXAEolblKDzCovgvKk/b406n2U3MYSBnYt3ZTIfgiC1bZ5OsWqUOBqBmi4gAABoCNuFuOsbEbsTt3sVAS6Uw9eQDtEBAt2s7Ef1zYgslIE2HdzI2YiORRSQnFCPC1rbSUqE=) 2025-09-13 00:24:11.830409 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKRtqlrfJTWkBqPYYRs0sYbaD+S2TD+Frcn4kjcX+i0veYaOQthxDrrVkXfvx5n84wUDI+G1eMBzt5IYc84AGiM=) 2025-09-13 00:24:11.830421 | orchestrator | 2025-09-13 00:24:11.830432 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-13 00:24:11.830443 | orchestrator | Saturday 13 September 2025 00:24:10 +0000 (0:00:00.966) 0:00:26.747 **** 2025-09-13 00:24:11.830455 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-13 00:24:11.830466 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-13 00:24:11.830477 | orchestrator | 
skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-13 00:24:11.830488 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-13 00:24:11.830499 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-13 00:24:11.830511 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-13 00:24:11.830538 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-13 00:24:11.830562 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:24:11.830575 | orchestrator | 2025-09-13 00:24:11.830605 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-13 00:24:11.830619 | orchestrator | Saturday 13 September 2025 00:24:11 +0000 (0:00:00.149) 0:00:26.897 **** 2025-09-13 00:24:11.830631 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:24:11.830643 | orchestrator | 2025-09-13 00:24:11.830656 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-13 00:24:11.830669 | orchestrator | Saturday 13 September 2025 00:24:11 +0000 (0:00:00.067) 0:00:26.965 **** 2025-09-13 00:24:11.830681 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:24:11.830693 | orchestrator | 2025-09-13 00:24:11.830705 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-13 00:24:11.830717 | orchestrator | Saturday 13 September 2025 00:24:11 +0000 (0:00:00.050) 0:00:27.016 **** 2025-09-13 00:24:11.830739 | orchestrator | changed: [testbed-manager] 2025-09-13 00:24:11.830751 | orchestrator | 2025-09-13 00:24:11.830762 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:24:11.830773 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-13 00:24:11.830785 | orchestrator | 2025-09-13 00:24:11.830796 | orchestrator | 2025-09-13 
00:24:11.830807 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 00:24:11.830818 | orchestrator | Saturday 13 September 2025 00:24:11 +0000 (0:00:00.461) 0:00:27.477 **** 2025-09-13 00:24:11.830829 | orchestrator | =============================================================================== 2025-09-13 00:24:11.830840 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.97s 2025-09-13 00:24:11.830851 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.12s 2025-09-13 00:24:11.830879 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.00s 2025-09-13 00:24:11.830891 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-09-13 00:24:11.830902 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-09-13 00:24:11.830913 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-09-13 00:24:11.830924 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-13 00:24:11.830934 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-13 00:24:11.830945 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-13 00:24:11.830956 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-09-13 00:24:11.830967 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2025-09-13 00:24:11.830978 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2025-09-13 00:24:11.830989 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2025-09-13 
00:24:11.831000 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2025-09-13 00:24:11.831010 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2025-09-13 00:24:11.831021 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.92s 2025-09-13 00:24:11.831032 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.46s 2025-09-13 00:24:11.831043 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-09-13 00:24:11.831054 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.15s 2025-09-13 00:24:11.831070 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2025-09-13 00:24:12.024229 | orchestrator | + osism apply squid 2025-09-13 00:24:23.834947 | orchestrator | 2025-09-13 00:24:23 | INFO  | Task 950c93ef-dbae-42dd-9188-1422a0fbefa9 (squid) was prepared for execution. 2025-09-13 00:24:23.835062 | orchestrator | 2025-09-13 00:24:23 | INFO  | It takes a moment until task 950c93ef-dbae-42dd-9188-1422a0fbefa9 (squid) has been started and output is visible here. 
2025-09-13 00:26:17.051831 | orchestrator | 2025-09-13 00:26:17.051944 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-13 00:26:17.051959 | orchestrator | 2025-09-13 00:26:17.051970 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-13 00:26:17.051981 | orchestrator | Saturday 13 September 2025 00:24:27 +0000 (0:00:00.166) 0:00:00.166 **** 2025-09-13 00:26:17.051992 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-13 00:26:17.052003 | orchestrator | 2025-09-13 00:26:17.052013 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-13 00:26:17.052049 | orchestrator | Saturday 13 September 2025 00:24:27 +0000 (0:00:00.118) 0:00:00.284 **** 2025-09-13 00:26:17.052060 | orchestrator | ok: [testbed-manager] 2025-09-13 00:26:17.052071 | orchestrator | 2025-09-13 00:26:17.052080 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-13 00:26:17.052090 | orchestrator | Saturday 13 September 2025 00:24:29 +0000 (0:00:01.594) 0:00:01.879 **** 2025-09-13 00:26:17.052100 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-13 00:26:17.052109 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-13 00:26:17.052119 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-13 00:26:17.052129 | orchestrator | 2025-09-13 00:26:17.052138 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-13 00:26:17.052148 | orchestrator | Saturday 13 September 2025 00:24:30 +0000 (0:00:01.110) 0:00:02.990 **** 2025-09-13 00:26:17.052157 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-13 00:26:17.052199 | 
orchestrator | 2025-09-13 00:26:17.052209 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-13 00:26:17.052219 | orchestrator | Saturday 13 September 2025 00:24:31 +0000 (0:00:00.976) 0:00:03.967 **** 2025-09-13 00:26:17.052228 | orchestrator | ok: [testbed-manager] 2025-09-13 00:26:17.052238 | orchestrator | 2025-09-13 00:26:17.052248 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-13 00:26:17.052257 | orchestrator | Saturday 13 September 2025 00:24:31 +0000 (0:00:00.324) 0:00:04.291 **** 2025-09-13 00:26:17.052267 | orchestrator | changed: [testbed-manager] 2025-09-13 00:26:17.052276 | orchestrator | 2025-09-13 00:26:17.052286 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-13 00:26:17.052296 | orchestrator | Saturday 13 September 2025 00:24:32 +0000 (0:00:00.832) 0:00:05.123 **** 2025-09-13 00:26:17.052305 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-09-13 00:26:17.052316 | orchestrator | ok: [testbed-manager] 2025-09-13 00:26:17.052325 | orchestrator | 2025-09-13 00:26:17.052335 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-13 00:26:17.052344 | orchestrator | Saturday 13 September 2025 00:25:03 +0000 (0:00:31.366) 0:00:36.489 **** 2025-09-13 00:26:17.052354 | orchestrator | changed: [testbed-manager] 2025-09-13 00:26:17.052364 | orchestrator | 2025-09-13 00:26:17.052373 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-13 00:26:17.052385 | orchestrator | Saturday 13 September 2025 00:25:15 +0000 (0:00:12.098) 0:00:48.588 **** 2025-09-13 00:26:17.052397 | orchestrator | Pausing for 60 seconds 2025-09-13 00:26:17.052409 | orchestrator | changed: [testbed-manager] 2025-09-13 00:26:17.052420 | orchestrator | 2025-09-13 00:26:17.052431 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-13 00:26:17.052443 | orchestrator | Saturday 13 September 2025 00:26:16 +0000 (0:01:00.076) 0:01:48.665 **** 2025-09-13 00:26:17.052454 | orchestrator | ok: [testbed-manager] 2025-09-13 00:26:17.052465 | orchestrator | 2025-09-13 00:26:17.052476 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-13 00:26:17.052487 | orchestrator | Saturday 13 September 2025 00:26:16 +0000 (0:00:00.071) 0:01:48.736 **** 2025-09-13 00:26:17.052498 | orchestrator | changed: [testbed-manager] 2025-09-13 00:26:17.052509 | orchestrator | 2025-09-13 00:26:17.052520 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:26:17.052531 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:26:17.052543 | orchestrator | 2025-09-13 00:26:17.052554 | orchestrator | 2025-09-13 00:26:17.052564 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-09-13 00:26:17.052575 | orchestrator | Saturday 13 September 2025 00:26:16 +0000 (0:00:00.681) 0:01:49.417 **** 2025-09-13 00:26:17.052594 | orchestrator | =============================================================================== 2025-09-13 00:26:17.052605 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-09-13 00:26:17.052616 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.37s 2025-09-13 00:26:17.052627 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.10s 2025-09-13 00:26:17.052638 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.59s 2025-09-13 00:26:17.052649 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.11s 2025-09-13 00:26:17.052659 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.98s 2025-09-13 00:26:17.052671 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.83s 2025-09-13 00:26:17.052682 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.68s 2025-09-13 00:26:17.052693 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.32s 2025-09-13 00:26:17.052704 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.12s 2025-09-13 00:26:17.052716 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-09-13 00:26:17.358230 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-13 00:26:17.358457 | orchestrator | ++ semver latest 9.0.0 2025-09-13 00:26:17.412845 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-13 00:26:17.412906 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-13 00:26:17.413249 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-13 00:26:29.486104 | orchestrator | 2025-09-13 00:26:29 | INFO  | Task a38e2703-5faa-466b-81cd-d41c2ed16442 (operator) was prepared for execution. 2025-09-13 00:26:29.486265 | orchestrator | 2025-09-13 00:26:29 | INFO  | It takes a moment until task a38e2703-5faa-466b-81cd-d41c2ed16442 (operator) has been started and output is visible here. 2025-09-13 00:26:46.457207 | orchestrator | 2025-09-13 00:26:46.457306 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-13 00:26:46.457318 | orchestrator | 2025-09-13 00:26:46.457326 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-13 00:26:46.457333 | orchestrator | Saturday 13 September 2025 00:26:33 +0000 (0:00:00.155) 0:00:00.155 **** 2025-09-13 00:26:46.457355 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:26:46.457363 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:26:46.457371 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:26:46.457377 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:26:46.457384 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:26:46.457391 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:26:46.457398 | orchestrator | 2025-09-13 00:26:46.457405 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-13 00:26:46.457412 | orchestrator | Saturday 13 September 2025 00:26:38 +0000 (0:00:04.704) 0:00:04.859 **** 2025-09-13 00:26:46.457419 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:26:46.457426 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:26:46.457433 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:26:46.457439 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:26:46.457446 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:26:46.457453 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:26:46.457459 | orchestrator | 2025-09-13 
00:26:46.457466 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-13 00:26:46.457473 | orchestrator | 2025-09-13 00:26:46.457480 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-13 00:26:46.457487 | orchestrator | Saturday 13 September 2025 00:26:38 +0000 (0:00:00.728) 0:00:05.587 **** 2025-09-13 00:26:46.457493 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:26:46.457500 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:26:46.457507 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:26:46.457513 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:26:46.457520 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:26:46.457527 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:26:46.457551 | orchestrator | 2025-09-13 00:26:46.457559 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-13 00:26:46.457565 | orchestrator | Saturday 13 September 2025 00:26:39 +0000 (0:00:00.187) 0:00:05.774 **** 2025-09-13 00:26:46.457572 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:26:46.457578 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:26:46.457585 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:26:46.457591 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:26:46.457598 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:26:46.457604 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:26:46.457611 | orchestrator | 2025-09-13 00:26:46.457618 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-13 00:26:46.457624 | orchestrator | Saturday 13 September 2025 00:26:39 +0000 (0:00:00.187) 0:00:05.962 **** 2025-09-13 00:26:46.457631 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:26:46.457638 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:26:46.457645 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:26:46.457651 | 
orchestrator | changed: [testbed-node-0] 2025-09-13 00:26:46.457658 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:26:46.457665 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:26:46.457672 | orchestrator | 2025-09-13 00:26:46.457678 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-13 00:26:46.457685 | orchestrator | Saturday 13 September 2025 00:26:39 +0000 (0:00:00.576) 0:00:06.538 **** 2025-09-13 00:26:46.457692 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:26:46.457698 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:26:46.457705 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:26:46.457712 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:26:46.457718 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:26:46.457725 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:26:46.457731 | orchestrator | 2025-09-13 00:26:46.457738 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-13 00:26:46.457745 | orchestrator | Saturday 13 September 2025 00:26:40 +0000 (0:00:00.768) 0:00:07.307 **** 2025-09-13 00:26:46.457751 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-13 00:26:46.457758 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-13 00:26:46.457765 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-13 00:26:46.457771 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-13 00:26:46.457778 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-13 00:26:46.457784 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-13 00:26:46.457791 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-09-13 00:26:46.457798 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-13 00:26:46.457804 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-09-13 00:26:46.457811 | orchestrator | changed: 
[testbed-node-0] => (item=sudo) 2025-09-13 00:26:46.457818 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-09-13 00:26:46.457824 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-13 00:26:46.457831 | orchestrator | 2025-09-13 00:26:46.457837 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-13 00:26:46.457848 | orchestrator | Saturday 13 September 2025 00:26:41 +0000 (0:00:01.152) 0:00:08.459 **** 2025-09-13 00:26:46.457855 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:26:46.457861 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:26:46.457868 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:26:46.457877 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:26:46.457888 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:26:46.457899 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:26:46.457910 | orchestrator | 2025-09-13 00:26:46.457924 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-13 00:26:46.457940 | orchestrator | Saturday 13 September 2025 00:26:43 +0000 (0:00:01.242) 0:00:09.702 **** 2025-09-13 00:26:46.457950 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-13 00:26:46.457969 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-09-13 00:26:46.457982 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-09-13 00:26:46.457997 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-09-13 00:26:46.458077 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-09-13 00:26:46.458087 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-09-13 00:26:46.458094 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-09-13 00:26:46.458100 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-09-13 00:26:46.458107 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-09-13 00:26:46.458113 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-09-13 00:26:46.458144 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-09-13 00:26:46.458151 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-09-13 00:26:46.458157 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-09-13 00:26:46.458164 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-09-13 00:26:46.458195 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-09-13 00:26:46.458207 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-09-13 00:26:46.458218 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-09-13 00:26:46.458228 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-09-13 00:26:46.458235 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-09-13 00:26:46.458242 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-09-13 00:26:46.458248 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-09-13 00:26:46.458255 | 
orchestrator |
2025-09-13 00:26:46.458262 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-09-13 00:26:46.458269 | orchestrator | Saturday 13 September 2025 00:26:44 +0000 (0:00:01.255) 0:00:10.958 ****
2025-09-13 00:26:46.458276 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:26:46.458283 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:26:46.458289 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:26:46.458296 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:26:46.458303 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:26:46.458309 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:26:46.458316 | orchestrator |
2025-09-13 00:26:46.458322 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-09-13 00:26:46.458329 | orchestrator | Saturday 13 September 2025 00:26:44 +0000 (0:00:00.149) 0:00:11.107 ****
2025-09-13 00:26:46.458335 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:26:46.458342 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:26:46.458348 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:26:46.458355 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:26:46.458362 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:26:46.458368 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:26:46.458375 | orchestrator |
2025-09-13 00:26:46.458381 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-09-13 00:26:46.458388 | orchestrator | Saturday 13 September 2025 00:26:45 +0000 (0:00:00.610) 0:00:11.718 ****
2025-09-13 00:26:46.458395 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:26:46.458401 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:26:46.458408 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:26:46.458414 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:26:46.458421 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:26:46.458427 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:26:46.458434 | orchestrator |
2025-09-13 00:26:46.458459 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-09-13 00:26:46.458465 | orchestrator | Saturday 13 September 2025 00:26:45 +0000 (0:00:00.170) 0:00:11.888 ****
2025-09-13 00:26:46.458472 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-13 00:26:46.458482 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-13 00:26:46.458489 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:26:46.458495 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:26:46.458502 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-13 00:26:46.458508 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:26:46.458515 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-13 00:26:46.458522 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:26:46.458528 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-13 00:26:46.458535 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:26:46.458541 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-13 00:26:46.458548 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:26:46.458554 | orchestrator |
2025-09-13 00:26:46.458561 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-09-13 00:26:46.458567 | orchestrator | Saturday 13 September 2025 00:26:45 +0000 (0:00:00.698) 0:00:12.587 ****
2025-09-13 00:26:46.458574 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:26:46.458580 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:26:46.458587 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:26:46.458594 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:26:46.458600 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:26:46.458607 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:26:46.458613 | orchestrator |
2025-09-13 00:26:46.458620 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-09-13 00:26:46.458632 | orchestrator | Saturday 13 September 2025 00:26:46 +0000 (0:00:00.218) 0:00:12.805 ****
2025-09-13 00:26:46.458639 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:26:46.458646 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:26:46.458652 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:26:46.458659 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:26:46.458665 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:26:46.458672 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:26:46.458678 | orchestrator |
2025-09-13 00:26:46.458685 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-09-13 00:26:46.458692 | orchestrator | Saturday 13 September 2025 00:26:46 +0000 (0:00:00.164) 0:00:12.970 ****
2025-09-13 00:26:46.458698 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:26:46.458705 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:26:46.458712 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:26:46.458718 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:26:46.458731 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:26:47.525383 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:26:47.525488 | orchestrator |
2025-09-13 00:26:47.525504 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-09-13 00:26:47.525517 | orchestrator | Saturday 13 September 2025 00:26:46 +0000 (0:00:00.158) 0:00:13.129 ****
2025-09-13 00:26:47.525529 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:26:47.525540 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:26:47.525550 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:26:47.525561 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:26:47.525572 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:26:47.525583 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:26:47.525594 | orchestrator |
2025-09-13 00:26:47.525605 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-09-13 00:26:47.525616 | orchestrator | Saturday 13 September 2025 00:26:47 +0000 (0:00:00.638) 0:00:13.768 ****
2025-09-13 00:26:47.525627 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:26:47.525637 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:26:47.525648 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:26:47.525684 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:26:47.525695 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:26:47.525706 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:26:47.525716 | orchestrator |
2025-09-13 00:26:47.525727 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:26:47.525739 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-13 00:26:47.525752 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-13 00:26:47.525762 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-13 00:26:47.525773 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-13 00:26:47.525784 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-13 00:26:47.525794 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-13 00:26:47.525805 | orchestrator |
2025-09-13 00:26:47.525816 | orchestrator |
2025-09-13 00:26:47.525826 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:26:47.525837 | orchestrator | Saturday 13 September 2025 00:26:47 +0000 (0:00:00.217) 0:00:13.985 ****
2025-09-13 00:26:47.525848 | orchestrator | ===============================================================================
2025-09-13 00:26:47.525859 | orchestrator | Gathering Facts --------------------------------------------------------- 4.70s
2025-09-13 00:26:47.525869 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s
2025-09-13 00:26:47.525880 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.24s
2025-09-13 00:26:47.525891 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.15s
2025-09-13 00:26:47.525902 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.77s
2025-09-13 00:26:47.525912 | orchestrator | Do not require tty for all users ---------------------------------------- 0.73s
2025-09-13 00:26:47.525928 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s
2025-09-13 00:26:47.525948 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s
2025-09-13 00:26:47.525968 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.61s
2025-09-13 00:26:47.525989 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.58s
2025-09-13 00:26:47.526011 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.22s
2025-09-13 00:26:47.526114 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s
2025-09-13 00:26:47.526129 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s
2025-09-13 00:26:47.526142 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s
2025-09-13 00:26:47.526188 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s
2025-09-13 00:26:47.526202 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s
2025-09-13 00:26:47.526213 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2025-09-13 00:26:47.526226 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s
2025-09-13 00:26:47.803463 | orchestrator | + osism apply --environment custom facts
2025-09-13 00:26:49.513216 | orchestrator | 2025-09-13 00:26:49 | INFO  | Trying to run play facts in environment custom
2025-09-13 00:26:59.650233 | orchestrator | 2025-09-13 00:26:59 | INFO  | Task 15b4403e-aca5-4246-a54b-01f310c9c392 (facts) was prepared for execution.
2025-09-13 00:26:59.650351 | orchestrator | 2025-09-13 00:26:59 | INFO  | It takes a moment until task 15b4403e-aca5-4246-a54b-01f310c9c392 (facts) has been started and output is visible here.
2025-09-13 00:27:45.399657 | orchestrator |
2025-09-13 00:27:45.399760 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-09-13 00:27:45.399772 | orchestrator |
2025-09-13 00:27:45.399781 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-13 00:27:45.399790 | orchestrator | Saturday 13 September 2025 00:27:03 +0000 (0:00:00.088) 0:00:00.088 ****
2025-09-13 00:27:45.399798 | orchestrator | ok: [testbed-manager]
2025-09-13 00:27:45.399808 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:27:45.399816 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:27:45.399824 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:27:45.399832 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:27:45.399840 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:27:45.399847 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:27:45.399855 | orchestrator |
2025-09-13 00:27:45.399863 | orchestrator | TASK [Copy fact file] **********************************************************
2025-09-13 00:27:45.399871 | orchestrator | Saturday 13 September 2025 00:27:04 +0000 (0:00:01.483) 0:00:01.572 ****
2025-09-13 00:27:45.399879 | orchestrator | ok: [testbed-manager]
2025-09-13 00:27:45.399888 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:27:45.399895 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:27:45.399903 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:27:45.399911 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:27:45.399919 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:27:45.399927 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:27:45.399935 | orchestrator |
2025-09-13 00:27:45.399943 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-09-13 00:27:45.399951 | orchestrator |
2025-09-13 00:27:45.399958 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-13 00:27:45.399967 | orchestrator | Saturday 13 September 2025 00:27:06 +0000 (0:00:01.179) 0:00:02.751 ****
2025-09-13 00:27:45.399975 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:27:45.399983 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:27:45.399991 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:27:45.399998 | orchestrator |
2025-09-13 00:27:45.400007 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-13 00:27:45.400015 | orchestrator | Saturday 13 September 2025 00:27:06 +0000 (0:00:00.104) 0:00:02.856 ****
2025-09-13 00:27:45.400023 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:27:45.400031 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:27:45.400039 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:27:45.400047 | orchestrator |
2025-09-13 00:27:45.400055 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-13 00:27:45.400063 | orchestrator | Saturday 13 September 2025 00:27:06 +0000 (0:00:00.231) 0:00:03.088 ****
2025-09-13 00:27:45.400071 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:27:45.400079 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:27:45.400087 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:27:45.400095 | orchestrator |
2025-09-13 00:27:45.400103 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-13 00:27:45.400111 | orchestrator | Saturday 13 September 2025 00:27:06 +0000 (0:00:00.211) 0:00:03.299 ****
2025-09-13 00:27:45.400120 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:27:45.400129 | orchestrator |
2025-09-13 00:27:45.400138 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-13 00:27:45.400146 | orchestrator | Saturday 13 September 2025 00:27:06 +0000 (0:00:00.154) 0:00:03.454 ****
2025-09-13 00:27:45.400173 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:27:45.400223 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:27:45.400233 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:27:45.400242 | orchestrator |
2025-09-13 00:27:45.400251 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-13 00:27:45.400260 | orchestrator | Saturday 13 September 2025 00:27:07 +0000 (0:00:00.431) 0:00:03.885 ****
2025-09-13 00:27:45.400269 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:27:45.400278 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:27:45.400287 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:27:45.400296 | orchestrator |
2025-09-13 00:27:45.400305 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-13 00:27:45.400314 | orchestrator | Saturday 13 September 2025 00:27:07 +0000 (0:00:00.111) 0:00:03.996 ****
2025-09-13 00:27:45.400323 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:27:45.400332 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:27:45.400341 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:27:45.400350 | orchestrator |
2025-09-13 00:27:45.400359 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-13 00:27:45.400368 | orchestrator | Saturday 13 September 2025 00:27:08 +0000 (0:00:01.026) 0:00:05.023 ****
2025-09-13 00:27:45.400377 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:27:45.400386 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:27:45.400395 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:27:45.400403 | orchestrator |
2025-09-13 00:27:45.400413 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-13 00:27:45.400422 | orchestrator | Saturday 13 September 2025 00:27:08 +0000 (0:00:00.447) 0:00:05.470 ****
2025-09-13 00:27:45.400432 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:27:45.400441 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:27:45.400450 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:27:45.400459 | orchestrator |
2025-09-13 00:27:45.400468 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-13 00:27:45.400477 | orchestrator | Saturday 13 September 2025 00:27:09 +0000 (0:00:01.038) 0:00:06.508 ****
2025-09-13 00:27:45.400502 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:27:45.400512 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:27:45.400521 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:27:45.400531 | orchestrator |
2025-09-13 00:27:45.400540 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-09-13 00:27:45.400549 | orchestrator | Saturday 13 September 2025 00:27:27 +0000 (0:00:17.248) 0:00:23.756 ****
2025-09-13 00:27:45.400558 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:27:45.400567 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:27:45.400575 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:27:45.400582 | orchestrator |
2025-09-13 00:27:45.400590 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-09-13 00:27:45.400613 | orchestrator | Saturday 13 September 2025 00:27:27 +0000 (0:00:00.107) 0:00:23.864 ****
2025-09-13 00:27:45.400622 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:27:45.400630 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:27:45.400638 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:27:45.400646 | orchestrator |
2025-09-13 00:27:45.400653 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-13 00:27:45.400661 | orchestrator | Saturday 13 September 2025 00:27:35 +0000 (0:00:08.445) 0:00:32.310 ****
2025-09-13 00:27:45.400669 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:27:45.400677 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:27:45.400685 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:27:45.400692 | orchestrator |
2025-09-13 00:27:45.400700 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-13 00:27:45.400708 | orchestrator | Saturday 13 September 2025 00:27:36 +0000 (0:00:00.501) 0:00:32.812 ****
2025-09-13 00:27:45.400716 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-09-13 00:27:45.400731 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-09-13 00:27:45.400739 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-09-13 00:27:45.400747 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-09-13 00:27:45.400754 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-09-13 00:27:45.400762 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-09-13 00:27:45.400770 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-09-13 00:27:45.400778 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-09-13 00:27:45.400786 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-09-13 00:27:45.400794 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-09-13 00:27:45.400802 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-09-13 00:27:45.400809 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-09-13 00:27:45.400817 | orchestrator |
2025-09-13 00:27:45.400825 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-13 00:27:45.400833 | orchestrator | Saturday 13 September 2025 00:27:39 +0000 (0:00:03.626) 0:00:36.438 ****
2025-09-13 00:27:45.400841 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:27:45.400849 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:27:45.400857 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:27:45.400864 | orchestrator |
2025-09-13 00:27:45.400872 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-13 00:27:45.400880 | orchestrator |
2025-09-13 00:27:45.400888 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-13 00:27:45.400896 | orchestrator | Saturday 13 September 2025 00:27:41 +0000 (0:00:01.327) 0:00:37.765 ****
2025-09-13 00:27:45.400904 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:27:45.400912 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:27:45.400920 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:27:45.400928 | orchestrator | ok: [testbed-manager]
2025-09-13 00:27:45.400936 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:27:45.400943 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:27:45.400951 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:27:45.400959 | orchestrator |
2025-09-13 00:27:45.400967 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:27:45.400976 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 00:27:45.400984 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 00:27:45.400993 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 00:27:45.401001 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 00:27:45.401009 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:27:45.401018 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:27:45.401029 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:27:45.401037 | orchestrator |
2025-09-13 00:27:45.401046 | orchestrator |
2025-09-13 00:27:45.401053 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:27:45.401061 | orchestrator | Saturday 13 September 2025 00:27:45 +0000 (0:00:04.304) 0:00:42.070 ****
2025-09-13 00:27:45.401069 | orchestrator | ===============================================================================
2025-09-13 00:27:45.401082 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.25s
2025-09-13 00:27:45.401090 | orchestrator | Install required packages (Debian) -------------------------------------- 8.45s
2025-09-13 00:27:45.401098 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.30s
2025-09-13 00:27:45.401106 | orchestrator | Copy fact files --------------------------------------------------------- 3.63s
2025-09-13 00:27:45.401114 | orchestrator | Create custom facts directory ------------------------------------------- 1.48s
2025-09-13 00:27:45.401122 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.33s
2025-09-13 00:27:45.401134 | orchestrator | Copy fact file ---------------------------------------------------------- 1.18s
2025-09-13 00:27:45.640366 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.04s
2025-09-13 00:27:45.640396 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s
2025-09-13 00:27:45.640404 | orchestrator | Create custom facts directory ------------------------------------------- 0.50s
2025-09-13 00:27:45.640413 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s
2025-09-13 00:27:45.640420 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2025-09-13 00:27:45.640428 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.23s
2025-09-13 00:27:45.640436 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s
2025-09-13 00:27:45.640443 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2025-09-13 00:27:45.640452 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2025-09-13 00:27:45.640459 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-09-13 00:27:45.640467 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2025-09-13 00:27:45.953868 | orchestrator | + osism apply bootstrap
2025-09-13 00:27:58.294969 | orchestrator | 2025-09-13 00:27:58 | INFO  | Task be51d60d-8ae3-4e18-809a-61fe996cdc28 (bootstrap) was prepared for execution.
2025-09-13 00:27:58.295082 | orchestrator | 2025-09-13 00:27:58 | INFO  | It takes a moment until task be51d60d-8ae3-4e18-809a-61fe996cdc28 (bootstrap) has been started and output is visible here.
2025-09-13 00:28:14.635878 | orchestrator |
2025-09-13 00:28:14.635999 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-09-13 00:28:14.636016 | orchestrator |
2025-09-13 00:28:14.636028 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-09-13 00:28:14.636040 | orchestrator | Saturday 13 September 2025 00:28:02 +0000 (0:00:00.164) 0:00:00.164 ****
2025-09-13 00:28:14.636051 | orchestrator | ok: [testbed-manager]
2025-09-13 00:28:14.636063 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:28:14.636074 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:28:14.636085 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:28:14.636096 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:28:14.636107 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:28:14.636118 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:28:14.636128 | orchestrator |
2025-09-13 00:28:14.636139 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-13 00:28:14.636150 | orchestrator |
2025-09-13 00:28:14.636161 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-13 00:28:14.636172 | orchestrator | Saturday 13 September 2025 00:28:02 +0000 (0:00:00.271) 0:00:00.436 ****
2025-09-13 00:28:14.636246 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:28:14.636257 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:28:14.636268 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:28:14.636279 | orchestrator | ok: [testbed-manager]
2025-09-13 00:28:14.636290 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:28:14.636301 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:28:14.636311 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:28:14.636347 | orchestrator |
2025-09-13 00:28:14.636358 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-09-13 00:28:14.636369 | orchestrator |
2025-09-13 00:28:14.636380 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-13 00:28:14.636391 | orchestrator | Saturday 13 September 2025 00:28:06 +0000 (0:00:03.726) 0:00:04.162 ****
2025-09-13 00:28:14.636402 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-09-13 00:28:14.636414 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-09-13 00:28:14.636427 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-09-13 00:28:14.636439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-13 00:28:14.636451 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-09-13 00:28:14.636463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-13 00:28:14.636476 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-09-13 00:28:14.636488 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-13 00:28:14.636500 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-09-13 00:28:14.636512 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-09-13 00:28:14.636524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-13 00:28:14.636537 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-09-13 00:28:14.636550 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-13 00:28:14.636562 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-13 00:28:14.636575 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-09-13 00:28:14.636587 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-13 00:28:14.636599 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-09-13 00:28:14.636612 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-09-13 00:28:14.636624 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-13 00:28:14.636636 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-09-13 00:28:14.636648 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-09-13 00:28:14.636660 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-09-13 00:28:14.636672 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-13 00:28:14.636685 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-09-13 00:28:14.636697 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-13 00:28:14.636709 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:28:14.636721 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:28:14.636733 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-09-13 00:28:14.636745 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-13 00:28:14.636758 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:28:14.636771 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-13 00:28:14.636781 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-13 00:28:14.636792 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-13 00:28:14.636802 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-09-13 00:28:14.636813 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-13 00:28:14.636823 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:28:14.636851 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-13 00:28:14.636862 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-09-13 00:28:14.636873 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-09-13 00:28:14.636884 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-13 00:28:14.636894 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-09-13 00:28:14.636913 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-09-13 00:28:14.636924 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-13 00:28:14.636935 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-09-13 00:28:14.636946 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-13 00:28:14.636957 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-13 00:28:14.636967 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:28:14.636996 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-13 00:28:14.637008 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-09-13 00:28:14.637018 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-13 00:28:14.637029 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:28:14.637040 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-09-13 00:28:14.637050 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-13 00:28:14.637061 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-13 00:28:14.637072 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-13 00:28:14.637082 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:28:14.637093 | orchestrator |
2025-09-13 00:28:14.637103 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-09-13 00:28:14.637114 | orchestrator |
2025-09-13 00:28:14.637125 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-09-13 00:28:14.637136 | orchestrator | Saturday 13 September 2025 00:28:06 +0000 (0:00:00.549) 0:00:04.711 ****
2025-09-13 00:28:14.637146 | orchestrator | ok: [testbed-manager]
2025-09-13 00:28:14.637157 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:28:14.637168 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:28:14.637201 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:28:14.637213 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:28:14.637224 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:28:14.637234 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:28:14.637245 | orchestrator |
2025-09-13 00:28:14.637255 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-09-13 00:28:14.637266 | orchestrator | Saturday 13 September 2025 00:28:09 +0000 (0:00:02.213) 0:00:06.924 ****
2025-09-13 00:28:14.637276 | orchestrator | ok: [testbed-manager]
2025-09-13 00:28:14.637287 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:28:14.637297 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:28:14.637308 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:28:14.637318 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:28:14.637328 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:28:14.637339 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:28:14.637349 | orchestrator |
2025-09-13 00:28:14.637360 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-09-13 00:28:14.637371 | orchestrator | Saturday 13 September 2025 00:28:10 +0000 (0:00:00.234) 0:00:08.127 ****
2025-09-13 00:28:14.637382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:28:14.637396 | orchestrator |
2025-09-13 00:28:14.637407 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-09-13 00:28:14.637417 | orchestrator | Saturday 13 September 2025 00:28:10 +0000 (0:00:00.234) 0:00:08.361 ****
2025-09-13 00:28:14.637428 | orchestrator | changed: [testbed-manager]
2025-09-13 00:28:14.637439 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:28:14.637455 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:28:14.637466 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:28:14.637477 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:28:14.637487 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:28:14.637498 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:28:14.637508 | orchestrator |
2025-09-13 00:28:14.637527 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-09-13 00:28:14.637538 | orchestrator | Saturday 13 September 2025 00:28:12 +0000 (0:00:01.807) 0:00:10.169 ****
2025-09-13 00:28:14.637548 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:28:14.637560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:28:14.637573 | orchestrator |
2025-09-13 00:28:14.637583 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-09-13 00:28:14.637594 | orchestrator | Saturday 13 September 2025 00:28:12 +0000 (0:00:00.228) 0:00:10.398 ****
2025-09-13 00:28:14.637605 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:28:14.637615 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:28:14.637626 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:28:14.637637 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:28:14.637647 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:28:14.637658 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:28:14.637668 | orchestrator |
2025-09-13 00:28:14.637679 | orchestrator | TASK [osism.commons.proxy : Set system
wide settings in environment file] ****** 2025-09-13 00:28:14.637689 | orchestrator | Saturday 13 September 2025 00:28:13 +0000 (0:00:00.953) 0:00:11.351 **** 2025-09-13 00:28:14.637700 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:28:14.637711 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:28:14.637721 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:28:14.637732 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:28:14.637742 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:28:14.637753 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:28:14.637763 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:28:14.637773 | orchestrator | 2025-09-13 00:28:14.637784 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-13 00:28:14.637795 | orchestrator | Saturday 13 September 2025 00:28:14 +0000 (0:00:00.534) 0:00:11.885 **** 2025-09-13 00:28:14.637805 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:28:14.637816 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:28:14.637826 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:28:14.637837 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:28:14.637847 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:28:14.637858 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:28:14.637868 | orchestrator | ok: [testbed-manager] 2025-09-13 00:28:14.637879 | orchestrator | 2025-09-13 00:28:14.637890 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-13 00:28:14.637901 | orchestrator | Saturday 13 September 2025 00:28:14 +0000 (0:00:00.379) 0:00:12.265 **** 2025-09-13 00:28:14.637912 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:28:14.637925 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:28:14.637953 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:28:26.463438 | orchestrator | skipping: 
[testbed-node-5] 2025-09-13 00:28:26.463553 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:28:26.463568 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:28:26.463581 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:28:26.463592 | orchestrator | 2025-09-13 00:28:26.463605 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-13 00:28:26.463619 | orchestrator | Saturday 13 September 2025 00:28:14 +0000 (0:00:00.195) 0:00:12.461 **** 2025-09-13 00:28:26.463632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:28:26.463662 | orchestrator | 2025-09-13 00:28:26.463673 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-13 00:28:26.463686 | orchestrator | Saturday 13 September 2025 00:28:14 +0000 (0:00:00.226) 0:00:12.688 **** 2025-09-13 00:28:26.463719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:28:26.463730 | orchestrator | 2025-09-13 00:28:26.463741 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-13 00:28:26.463752 | orchestrator | Saturday 13 September 2025 00:28:15 +0000 (0:00:00.263) 0:00:12.951 **** 2025-09-13 00:28:26.463763 | orchestrator | ok: [testbed-manager] 2025-09-13 00:28:26.463774 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:28:26.463785 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:28:26.463796 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:28:26.463806 | orchestrator | ok: [testbed-node-1] 2025-09-13 
00:28:26.463817 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:28:26.463828 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:28:26.463839 | orchestrator | 2025-09-13 00:28:26.463850 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-13 00:28:26.463861 | orchestrator | Saturday 13 September 2025 00:28:16 +0000 (0:00:01.372) 0:00:14.324 **** 2025-09-13 00:28:26.463871 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:28:26.463882 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:28:26.463893 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:28:26.463903 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:28:26.463914 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:28:26.463925 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:28:26.463936 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:28:26.463946 | orchestrator | 2025-09-13 00:28:26.463957 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-13 00:28:26.463968 | orchestrator | Saturday 13 September 2025 00:28:16 +0000 (0:00:00.181) 0:00:14.505 **** 2025-09-13 00:28:26.463979 | orchestrator | ok: [testbed-manager] 2025-09-13 00:28:26.463990 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:28:26.464001 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:28:26.464012 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:28:26.464023 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:28:26.464033 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:28:26.464044 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:28:26.464054 | orchestrator | 2025-09-13 00:28:26.464065 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-13 00:28:26.464076 | orchestrator | Saturday 13 September 2025 00:28:17 +0000 (0:00:00.568) 0:00:15.074 **** 2025-09-13 00:28:26.464087 | orchestrator | skipping: 
[testbed-manager] 2025-09-13 00:28:26.464098 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:28:26.464109 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:28:26.464120 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:28:26.464130 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:28:26.464141 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:28:26.464152 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:28:26.464162 | orchestrator | 2025-09-13 00:28:26.464173 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-13 00:28:26.464217 | orchestrator | Saturday 13 September 2025 00:28:17 +0000 (0:00:00.231) 0:00:15.305 **** 2025-09-13 00:28:26.464228 | orchestrator | ok: [testbed-manager] 2025-09-13 00:28:26.464239 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:28:26.464250 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:28:26.464261 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:28:26.464271 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:28:26.464282 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:28:26.464293 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:28:26.464303 | orchestrator | 2025-09-13 00:28:26.464314 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-13 00:28:26.464325 | orchestrator | Saturday 13 September 2025 00:28:18 +0000 (0:00:00.603) 0:00:15.909 **** 2025-09-13 00:28:26.464344 | orchestrator | ok: [testbed-manager] 2025-09-13 00:28:26.464355 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:28:26.464366 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:28:26.464377 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:28:26.464387 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:28:26.464398 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:28:26.464409 | orchestrator | changed: 
[testbed-node-1] 2025-09-13 00:28:26.464419 | orchestrator | 2025-09-13 00:28:26.464430 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-13 00:28:26.464441 | orchestrator | Saturday 13 September 2025 00:28:19 +0000 (0:00:01.028) 0:00:16.937 **** 2025-09-13 00:28:26.464452 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:28:26.464463 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:28:26.464474 | orchestrator | ok: [testbed-manager] 2025-09-13 00:28:26.464484 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:28:26.464495 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:28:26.464506 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:28:26.464516 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:28:26.464527 | orchestrator | 2025-09-13 00:28:26.464538 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-13 00:28:26.464549 | orchestrator | Saturday 13 September 2025 00:28:20 +0000 (0:00:01.145) 0:00:18.083 **** 2025-09-13 00:28:26.464578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:28:26.464590 | orchestrator | 2025-09-13 00:28:26.464600 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-13 00:28:26.464612 | orchestrator | Saturday 13 September 2025 00:28:20 +0000 (0:00:00.434) 0:00:18.517 **** 2025-09-13 00:28:26.464622 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:28:26.464633 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:28:26.464644 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:28:26.464654 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:28:26.464665 | orchestrator | changed: [testbed-node-0] 2025-09-13 
00:28:26.464676 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:28:26.464686 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:28:26.464697 | orchestrator | 2025-09-13 00:28:26.464707 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-13 00:28:26.464718 | orchestrator | Saturday 13 September 2025 00:28:22 +0000 (0:00:01.266) 0:00:19.784 **** 2025-09-13 00:28:26.464729 | orchestrator | ok: [testbed-manager] 2025-09-13 00:28:26.464740 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:28:26.464750 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:28:26.464761 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:28:26.464772 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:28:26.464782 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:28:26.464793 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:28:26.464803 | orchestrator | 2025-09-13 00:28:26.464814 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-13 00:28:26.464825 | orchestrator | Saturday 13 September 2025 00:28:22 +0000 (0:00:00.227) 0:00:20.011 **** 2025-09-13 00:28:26.464836 | orchestrator | ok: [testbed-manager] 2025-09-13 00:28:26.464846 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:28:26.464857 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:28:26.464867 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:28:26.464878 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:28:26.464888 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:28:26.464899 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:28:26.464909 | orchestrator | 2025-09-13 00:28:26.464920 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-13 00:28:26.464931 | orchestrator | Saturday 13 September 2025 00:28:22 +0000 (0:00:00.258) 0:00:20.270 **** 2025-09-13 00:28:26.464941 | orchestrator | ok: [testbed-manager] 2025-09-13 00:28:26.464993 | 
orchestrator | ok: [testbed-node-3] 2025-09-13 00:28:26.465013 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:28:26.465024 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:28:26.465035 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:28:26.465045 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:28:26.465056 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:28:26.465067 | orchestrator | 2025-09-13 00:28:26.465078 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-13 00:28:26.465089 | orchestrator | Saturday 13 September 2025 00:28:22 +0000 (0:00:00.215) 0:00:20.486 **** 2025-09-13 00:28:26.465106 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:28:26.465119 | orchestrator | 2025-09-13 00:28:26.465130 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-13 00:28:26.465141 | orchestrator | Saturday 13 September 2025 00:28:22 +0000 (0:00:00.285) 0:00:20.772 **** 2025-09-13 00:28:26.465152 | orchestrator | ok: [testbed-manager] 2025-09-13 00:28:26.465162 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:28:26.465173 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:28:26.465208 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:28:26.465219 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:28:26.465229 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:28:26.465240 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:28:26.465250 | orchestrator | 2025-09-13 00:28:26.465261 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-13 00:28:26.465272 | orchestrator | Saturday 13 September 2025 00:28:23 +0000 (0:00:00.552) 0:00:21.325 **** 2025-09-13 00:28:26.465283 | orchestrator | 
skipping: [testbed-manager] 2025-09-13 00:28:26.465293 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:28:26.465304 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:28:26.465314 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:28:26.465325 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:28:26.465336 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:28:26.465346 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:28:26.465357 | orchestrator | 2025-09-13 00:28:26.465368 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-13 00:28:26.465378 | orchestrator | Saturday 13 September 2025 00:28:23 +0000 (0:00:00.240) 0:00:21.566 **** 2025-09-13 00:28:26.465389 | orchestrator | ok: [testbed-manager] 2025-09-13 00:28:26.465400 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:28:26.465410 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:28:26.465421 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:28:26.465431 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:28:26.465442 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:28:26.465453 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:28:26.465463 | orchestrator | 2025-09-13 00:28:26.465474 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-13 00:28:26.465485 | orchestrator | Saturday 13 September 2025 00:28:24 +0000 (0:00:01.034) 0:00:22.601 **** 2025-09-13 00:28:26.465496 | orchestrator | ok: [testbed-manager] 2025-09-13 00:28:26.465507 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:28:26.465517 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:28:26.465528 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:28:26.465539 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:28:26.465549 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:28:26.465560 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:28:26.465571 | orchestrator | 
2025-09-13 00:28:26.465581 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-13 00:28:26.465592 | orchestrator | Saturday 13 September 2025 00:28:25 +0000 (0:00:00.548) 0:00:23.149 **** 2025-09-13 00:28:26.465603 | orchestrator | ok: [testbed-manager] 2025-09-13 00:28:26.465613 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:28:26.465624 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:28:26.465634 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:28:26.465660 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:29:08.518273 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:29:08.518389 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:29:08.518406 | orchestrator | 2025-09-13 00:29:08.518419 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-13 00:29:08.518432 | orchestrator | Saturday 13 September 2025 00:28:26 +0000 (0:00:01.075) 0:00:24.224 **** 2025-09-13 00:29:08.518443 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:29:08.518456 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:29:08.518466 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:29:08.518477 | orchestrator | changed: [testbed-manager] 2025-09-13 00:29:08.518488 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:29:08.518499 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:29:08.518510 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:29:08.518521 | orchestrator | 2025-09-13 00:29:08.518531 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-13 00:29:08.518542 | orchestrator | Saturday 13 September 2025 00:28:44 +0000 (0:00:17.641) 0:00:41.865 **** 2025-09-13 00:29:08.518553 | orchestrator | ok: [testbed-manager] 2025-09-13 00:29:08.518564 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:29:08.518575 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:29:08.518586 | orchestrator 
| ok: [testbed-node-5] 2025-09-13 00:29:08.518596 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:29:08.518607 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:29:08.518618 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:29:08.518628 | orchestrator | 2025-09-13 00:29:08.518639 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-13 00:29:08.518650 | orchestrator | Saturday 13 September 2025 00:28:44 +0000 (0:00:00.169) 0:00:42.034 **** 2025-09-13 00:29:08.518661 | orchestrator | ok: [testbed-manager] 2025-09-13 00:29:08.518672 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:29:08.518682 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:29:08.518693 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:29:08.518704 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:29:08.518714 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:29:08.518725 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:29:08.518736 | orchestrator | 2025-09-13 00:29:08.518746 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-13 00:29:08.518757 | orchestrator | Saturday 13 September 2025 00:28:44 +0000 (0:00:00.174) 0:00:42.209 **** 2025-09-13 00:29:08.518768 | orchestrator | ok: [testbed-manager] 2025-09-13 00:29:08.518781 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:29:08.518794 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:29:08.518808 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:29:08.518820 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:29:08.518832 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:29:08.518844 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:29:08.518858 | orchestrator | 2025-09-13 00:29:08.518871 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-13 00:29:08.518884 | orchestrator | Saturday 13 September 2025 00:28:44 +0000 (0:00:00.193) 0:00:42.403 **** 2025-09-13 
00:29:08.518916 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:29:08.518932 | orchestrator | 2025-09-13 00:29:08.518945 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-13 00:29:08.518957 | orchestrator | Saturday 13 September 2025 00:28:44 +0000 (0:00:00.261) 0:00:42.664 **** 2025-09-13 00:29:08.518969 | orchestrator | ok: [testbed-manager] 2025-09-13 00:29:08.518983 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:29:08.518995 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:29:08.519008 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:29:08.519021 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:29:08.519034 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:29:08.519071 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:29:08.519085 | orchestrator | 2025-09-13 00:29:08.519098 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-13 00:29:08.519111 | orchestrator | Saturday 13 September 2025 00:28:46 +0000 (0:00:01.812) 0:00:44.476 **** 2025-09-13 00:29:08.519124 | orchestrator | changed: [testbed-manager] 2025-09-13 00:29:08.519135 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:29:08.519146 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:29:08.519156 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:29:08.519167 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:29:08.519178 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:29:08.519215 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:29:08.519226 | orchestrator | 2025-09-13 00:29:08.519237 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-13 00:29:08.519247 | 
orchestrator | Saturday 13 September 2025 00:28:47 +0000 (0:00:01.128) 0:00:45.605 **** 2025-09-13 00:29:08.519259 | orchestrator | ok: [testbed-manager] 2025-09-13 00:29:08.519270 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:29:08.519281 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:29:08.519291 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:29:08.519302 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:29:08.519313 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:29:08.519323 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:29:08.519334 | orchestrator | 2025-09-13 00:29:08.519344 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-13 00:29:08.519355 | orchestrator | Saturday 13 September 2025 00:28:48 +0000 (0:00:00.771) 0:00:46.376 **** 2025-09-13 00:29:08.519367 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:29:08.519380 | orchestrator | 2025-09-13 00:29:08.519391 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-13 00:29:08.519402 | orchestrator | Saturday 13 September 2025 00:28:48 +0000 (0:00:00.299) 0:00:46.676 **** 2025-09-13 00:29:08.519413 | orchestrator | changed: [testbed-manager] 2025-09-13 00:29:08.519424 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:29:08.519435 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:29:08.519445 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:29:08.519456 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:29:08.519467 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:29:08.519477 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:29:08.519488 | orchestrator | 2025-09-13 00:29:08.519517 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2025-09-13 00:29:08.519529 | orchestrator | Saturday 13 September 2025 00:28:49 +0000 (0:00:01.093) 0:00:47.769 **** 2025-09-13 00:29:08.519540 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:29:08.519551 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:29:08.519562 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:29:08.519572 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:29:08.519583 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:29:08.519594 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:29:08.519604 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:29:08.519615 | orchestrator | 2025-09-13 00:29:08.519626 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-13 00:29:08.519637 | orchestrator | Saturday 13 September 2025 00:28:50 +0000 (0:00:00.344) 0:00:48.114 **** 2025-09-13 00:29:08.519647 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:29:08.519658 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:29:08.519669 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:29:08.519679 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:29:08.519690 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:29:08.519701 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:29:08.519711 | orchestrator | changed: [testbed-manager] 2025-09-13 00:29:08.519732 | orchestrator | 2025-09-13 00:29:08.519743 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-13 00:29:08.519754 | orchestrator | Saturday 13 September 2025 00:29:02 +0000 (0:00:12.093) 0:01:00.208 **** 2025-09-13 00:29:08.519765 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:29:08.519775 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:29:08.519786 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:29:08.519797 | orchestrator | ok: [testbed-node-0] 2025-09-13 
ok: [testbed-node-1]
ok: [testbed-manager]
ok: [testbed-node-2]

TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
Saturday 13 September 2025  00:29:03 +0000 (0:00:01.217)       0:01:01.426 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-1]

TASK [osism.commons.packages : Gather variables for each operating system] *****
Saturday 13 September 2025  00:29:04 +0000 (0:00:01.026)       0:01:02.452 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
Saturday 13 September 2025  00:29:04 +0000 (0:00:00.192)       0:01:02.645 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Include distribution specific package tasks] ****
Saturday 13 September 2025  00:29:05 +0000 (0:00:00.184)       0:01:02.830 ****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.packages : Install needrestart package] ********************
Saturday 13 September 2025  00:29:05 +0000 (0:00:00.246)       0:01:03.076 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-4]

TASK [osism.commons.packages : Set needrestart mode] ***************************
Saturday 13 September 2025  00:29:07 +0000 (0:00:02.485)       0:01:05.561 ****
changed: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-3]

TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
Saturday 13 September 2025  00:29:08 +0000 (0:00:00.528)       0:01:06.090 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Update package cache] ***************************
Saturday 13 September 2025  00:29:08 +0000 (0:00:00.190)       0:01:06.280 ****
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-manager]

TASK [osism.commons.packages : Download upgrade packages] **********************
Saturday 13 September 2025  00:29:10 +0000 (0:00:01.870)       0:01:08.151 ****
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

TASK [osism.commons.packages : Upgrade packages] *******************************
Saturday 13 September 2025  00:29:12 +0000 (0:00:01.685)       0:01:09.836 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-0]

TASK [osism.commons.packages : Download required packages] *********************
Saturday 13 September 2025  00:29:14 +0000 (0:00:02.280)       0:01:12.117 ****
ok: [testbed-manager]
ok: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-0]

TASK [osism.commons.packages : Install required packages] **********************
Saturday 13 September 2025  00:29:53 +0000 (0:00:39.369)       0:01:51.487 ****
changed: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]

TASK [osism.commons.packages : Remove useless packages from the cache] *********
Saturday 13 September 2025  00:31:11 +0000 (0:01:17.682)       0:03:09.170 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
Saturday 13 September 2025  00:31:13 +0000 (0:00:01.825)       0:03:10.996 ****
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-2]
changed: [testbed-manager]

TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
Saturday 13 September 2025  00:31:24 +0000 (0:00:10.862)       0:03:21.859 ****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})

TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
Saturday 13 September 2025  00:31:24 +0000 (0:00:00.332)       0:03:22.192 ****
skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-5]
changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})

TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
Saturday 13 September 2025  00:31:26 +0000 (0:00:01.681)       0:03:23.873 ****
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-4]
skipping: [testbed-node-5]
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})

TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
Saturday 13 September 2025  00:31:30 +0000 (0:00:04.710)       0:03:28.584 ****
changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})

TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
Saturday 13 September 2025  00:31:31 +0000 (0:00:00.638)       0:03:29.222 ****
skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-2]
changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})

TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
Saturday 13 September 2025  00:31:31 +0000 (0:00:00.537)       0:03:29.759 ****
skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-2]
changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})

TASK [osism.commons.limits : Include limits tasks] *****************************
Saturday 13 September 2025  00:31:32 +0000 (0:00:00.584)       0:03:30.344 ****
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.services : Populate service facts] *************************
Saturday 13 September 2025  00:31:32 +0000 (0:00:00.334)       0:03:30.679 ****
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-manager]
ok: [testbed-node-0]

TASK [osism.commons.services : Check services] *********************************
Saturday 13 September 2025  00:31:38 +0000 (0:00:05.874)       0:03:36.553 ****
skipping: [testbed-manager] => (item=nscd)
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item=nscd)
skipping: [testbed-node-4] => (item=nscd)
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item=nscd)
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=nscd)
skipping: [testbed-node-1] => (item=nscd)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=nscd)
skipping: [testbed-node-2]

TASK [osism.commons.services : Start/enable required services] *****************
Saturday 13 September 2025  00:31:39 +0000 (0:00:00.326)       0:03:36.879 ****
ok: [testbed-manager] => (item=cron)
ok: [testbed-node-3] => (item=cron)
ok: [testbed-node-4] => (item=cron)
ok: [testbed-node-5] => (item=cron)
ok: [testbed-node-0] => (item=cron)
ok: [testbed-node-1] => (item=cron)
ok: [testbed-node-2] => (item=cron)

TASK [osism.commons.motd : Include distribution specific configure tasks] ******
Saturday 13 September 2025  00:31:40 +0000 (0:00:01.083)       0:03:37.963 ****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.motd : Remove update-motd package] *************************
Saturday 13 September 2025  00:31:40 +0000 (0:00:00.476)       0:03:38.440 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
Saturday 13 September 2025  00:31:42 +0000 (0:00:01.384)       0:03:39.825 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
Saturday 13 September 2025  00:31:42 +0000 (0:00:00.621)       0:03:40.446 ****
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
Saturday 13 September 2025  00:31:43 +0000 (0:00:00.626)       0:03:41.073 ****
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-2]

TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
Saturday 13 September 2025  00:31:43 +0000 (0:00:00.610)       0:03:41.684 ****
changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757721842.2243629, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757721878.5118976, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757721861.9717815, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757721873.2069702, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757721864.017074, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757721881.2538645, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757721863.328536, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root',
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 00:32:00.926741 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 00:32:00.926776 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 00:32:00.926788 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 00:32:00.926800 | 
orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 00:32:00.926812 | orchestrator | 2025-09-13 00:32:00.926825 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-13 00:32:00.926838 | orchestrator | Saturday 13 September 2025 00:31:44 +0000 (0:00:00.910) 0:03:42.594 **** 2025-09-13 00:32:00.926849 | orchestrator | changed: [testbed-manager] 2025-09-13 00:32:00.926861 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:32:00.926872 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:32:00.926882 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:32:00.926893 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:32:00.926904 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:32:00.926915 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:32:00.926925 | orchestrator | 2025-09-13 00:32:00.926936 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-13 00:32:00.926947 | orchestrator | Saturday 13 September 2025 00:31:45 +0000 (0:00:01.097) 0:03:43.692 **** 2025-09-13 00:32:00.926958 | orchestrator | changed: [testbed-manager] 2025-09-13 00:32:00.926969 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:32:00.926980 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:32:00.926991 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:32:00.927018 | orchestrator | changed: [testbed-node-0] 2025-09-13 
00:32:00.927030 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:32:00.927041 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:32:00.927052 | orchestrator | 2025-09-13 00:32:00.927062 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-09-13 00:32:00.927073 | orchestrator | Saturday 13 September 2025 00:31:47 +0000 (0:00:01.174) 0:03:44.866 **** 2025-09-13 00:32:00.927084 | orchestrator | changed: [testbed-manager] 2025-09-13 00:32:00.927112 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:32:00.927124 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:32:00.927135 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:32:00.927146 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:32:00.927157 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:32:00.927167 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:32:00.927178 | orchestrator | 2025-09-13 00:32:00.927215 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-09-13 00:32:00.927227 | orchestrator | Saturday 13 September 2025 00:31:48 +0000 (0:00:01.192) 0:03:46.059 **** 2025-09-13 00:32:00.927246 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:32:00.927257 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:32:00.927268 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:32:00.927279 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:32:00.927289 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:32:00.927300 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:32:00.927310 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:32:00.927321 | orchestrator | 2025-09-13 00:32:00.927332 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-09-13 00:32:00.927344 | orchestrator | Saturday 13 September 2025 00:31:48 +0000 (0:00:00.291) 0:03:46.351 **** 2025-09-13 
00:32:00.927355 | orchestrator | ok: [testbed-manager] 2025-09-13 00:32:00.927366 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:32:00.927377 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:32:00.927388 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:32:00.927399 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:32:00.927409 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:32:00.927420 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:32:00.927431 | orchestrator | 2025-09-13 00:32:00.927442 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-09-13 00:32:00.927453 | orchestrator | Saturday 13 September 2025 00:31:49 +0000 (0:00:00.749) 0:03:47.101 **** 2025-09-13 00:32:00.927471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:32:00.927485 | orchestrator | 2025-09-13 00:32:00.927496 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-09-13 00:32:00.927507 | orchestrator | Saturday 13 September 2025 00:31:49 +0000 (0:00:00.457) 0:03:47.559 **** 2025-09-13 00:32:00.927518 | orchestrator | ok: [testbed-manager] 2025-09-13 00:32:00.927529 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:32:00.927540 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:32:00.927550 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:32:00.927561 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:32:00.927572 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:32:00.927583 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:32:00.927593 | orchestrator | 2025-09-13 00:32:00.927604 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-09-13 00:32:00.927615 | orchestrator | 
Saturday 13 September 2025 00:31:57 +0000 (0:00:07.643) 0:03:55.202 **** 2025-09-13 00:32:00.927626 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:32:00.927637 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:32:00.927648 | orchestrator | ok: [testbed-manager] 2025-09-13 00:32:00.927658 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:32:00.927669 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:32:00.927680 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:32:00.927691 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:32:00.927702 | orchestrator | 2025-09-13 00:32:00.927713 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-09-13 00:32:00.927724 | orchestrator | Saturday 13 September 2025 00:31:58 +0000 (0:00:01.267) 0:03:56.470 **** 2025-09-13 00:32:00.927735 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:32:00.927745 | orchestrator | ok: [testbed-manager] 2025-09-13 00:32:00.927756 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:32:00.927767 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:32:00.927777 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:32:00.927788 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:32:00.927799 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:32:00.927809 | orchestrator | 2025-09-13 00:32:00.927820 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-09-13 00:32:00.927831 | orchestrator | Saturday 13 September 2025 00:31:59 +0000 (0:00:01.033) 0:03:57.504 **** 2025-09-13 00:32:00.927842 | orchestrator | ok: [testbed-manager] 2025-09-13 00:32:00.927859 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:32:00.927870 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:32:00.927881 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:32:00.927892 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:32:00.927902 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:32:00.927913 | orchestrator | ok: 
[testbed-node-2] 2025-09-13 00:32:00.927924 | orchestrator | 2025-09-13 00:32:00.927935 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-09-13 00:32:00.927947 | orchestrator | Saturday 13 September 2025 00:32:00 +0000 (0:00:00.322) 0:03:57.827 **** 2025-09-13 00:32:00.927958 | orchestrator | ok: [testbed-manager] 2025-09-13 00:32:00.927969 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:32:00.927979 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:32:00.927990 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:32:00.928001 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:32:00.928011 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:32:00.928022 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:32:00.928033 | orchestrator | 2025-09-13 00:32:00.928044 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-09-13 00:32:00.928055 | orchestrator | Saturday 13 September 2025 00:32:00 +0000 (0:00:00.519) 0:03:58.346 **** 2025-09-13 00:32:00.928066 | orchestrator | ok: [testbed-manager] 2025-09-13 00:32:00.928076 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:32:00.928087 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:32:00.928098 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:32:00.928109 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:32:00.928126 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:33:09.902824 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:33:09.902933 | orchestrator | 2025-09-13 00:33:09.902952 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-09-13 00:33:09.902966 | orchestrator | Saturday 13 September 2025 00:32:00 +0000 (0:00:00.346) 0:03:58.692 **** 2025-09-13 00:33:09.902977 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:33:09.902989 | orchestrator | ok: [testbed-manager] 2025-09-13 00:33:09.903000 | orchestrator | ok: 
[testbed-node-2] 2025-09-13 00:33:09.903011 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:33:09.903022 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:33:09.903033 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:33:09.903044 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:33:09.903055 | orchestrator | 2025-09-13 00:33:09.903066 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-09-13 00:33:09.903078 | orchestrator | Saturday 13 September 2025 00:32:06 +0000 (0:00:05.788) 0:04:04.481 **** 2025-09-13 00:33:09.903092 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:33:09.903106 | orchestrator | 2025-09-13 00:33:09.903118 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-09-13 00:33:09.903129 | orchestrator | Saturday 13 September 2025 00:32:07 +0000 (0:00:00.421) 0:04:04.902 **** 2025-09-13 00:33:09.903141 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-09-13 00:33:09.903152 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-09-13 00:33:09.903163 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-09-13 00:33:09.903174 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-09-13 00:33:09.903185 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:33:09.903240 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-09-13 00:33:09.903252 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:33:09.903263 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-09-13 00:33:09.903274 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-09-13 00:33:09.903285 | orchestrator | 
skipping: [testbed-node-5] => (item=apt-daily)  2025-09-13 00:33:09.903296 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:33:09.903334 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-09-13 00:33:09.903361 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-09-13 00:33:09.903375 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:33:09.903388 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-09-13 00:33:09.903400 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:33:09.903413 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-09-13 00:33:09.903426 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:33:09.903439 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-09-13 00:33:09.903451 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-09-13 00:33:09.903464 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:33:09.903476 | orchestrator | 2025-09-13 00:33:09.903489 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-09-13 00:33:09.903502 | orchestrator | Saturday 13 September 2025 00:32:07 +0000 (0:00:00.377) 0:04:05.279 **** 2025-09-13 00:33:09.903515 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:33:09.903528 | orchestrator | 2025-09-13 00:33:09.903541 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-09-13 00:33:09.903554 | orchestrator | Saturday 13 September 2025 00:32:07 +0000 (0:00:00.490) 0:04:05.770 **** 2025-09-13 00:33:09.903567 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-09-13 00:33:09.903579 | orchestrator | skipping: 
[testbed-manager] 2025-09-13 00:33:09.903591 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-09-13 00:33:09.903604 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-09-13 00:33:09.903616 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:33:09.903629 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:33:09.903641 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-09-13 00:33:09.903654 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-09-13 00:33:09.903667 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:33:09.903679 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-09-13 00:33:09.903693 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:33:09.903705 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:33:09.903718 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-09-13 00:33:09.903728 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:33:09.903739 | orchestrator | 2025-09-13 00:33:09.903750 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-09-13 00:33:09.903761 | orchestrator | Saturday 13 September 2025 00:32:08 +0000 (0:00:00.406) 0:04:06.177 **** 2025-09-13 00:33:09.903772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:33:09.903783 | orchestrator | 2025-09-13 00:33:09.903794 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-09-13 00:33:09.903805 | orchestrator | Saturday 13 September 2025 00:32:08 +0000 (0:00:00.426) 0:04:06.604 **** 2025-09-13 00:33:09.903816 | orchestrator | changed: [testbed-node-5] 2025-09-13 
00:33:09.903844 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:33:09.903856 | orchestrator | changed: [testbed-manager] 2025-09-13 00:33:09.903867 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:33:09.903878 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:33:09.903888 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:33:09.903899 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:33:09.903910 | orchestrator | 2025-09-13 00:33:09.903921 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-09-13 00:33:09.903940 | orchestrator | Saturday 13 September 2025 00:32:43 +0000 (0:00:34.321) 0:04:40.926 **** 2025-09-13 00:33:09.903951 | orchestrator | changed: [testbed-manager] 2025-09-13 00:33:09.903962 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:33:09.903972 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:33:09.903983 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:33:09.903994 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:33:09.904005 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:33:09.904016 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:33:09.904026 | orchestrator | 2025-09-13 00:33:09.904037 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-09-13 00:33:09.904048 | orchestrator | Saturday 13 September 2025 00:32:50 +0000 (0:00:07.529) 0:04:48.455 **** 2025-09-13 00:33:09.904059 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:33:09.904070 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:33:09.904080 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:33:09.904091 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:33:09.904102 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:33:09.904112 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:33:09.904123 | orchestrator | changed: [testbed-manager] 2025-09-13 00:33:09.904134 | 
orchestrator | 2025-09-13 00:33:09.904145 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-09-13 00:33:09.904155 | orchestrator | Saturday 13 September 2025 00:32:58 +0000 (0:00:07.711) 0:04:56.166 **** 2025-09-13 00:33:09.904166 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:33:09.904177 | orchestrator | ok: [testbed-manager] 2025-09-13 00:33:09.904188 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:33:09.904216 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:33:09.904227 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:33:09.904238 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:33:09.904249 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:33:09.904259 | orchestrator | 2025-09-13 00:33:09.904270 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-09-13 00:33:09.904282 | orchestrator | Saturday 13 September 2025 00:33:00 +0000 (0:00:01.621) 0:04:57.788 **** 2025-09-13 00:33:09.904293 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:33:09.904304 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:33:09.904320 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:33:09.904331 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:33:09.904342 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:33:09.904352 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:33:09.904363 | orchestrator | changed: [testbed-manager] 2025-09-13 00:33:09.904373 | orchestrator | 2025-09-13 00:33:09.904384 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-09-13 00:33:09.904395 | orchestrator | Saturday 13 September 2025 00:33:05 +0000 (0:00:05.754) 0:05:03.543 **** 2025-09-13 00:33:09.904407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:33:09.904419 | orchestrator | 2025-09-13 00:33:09.904430 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-09-13 00:33:09.904441 | orchestrator | Saturday 13 September 2025 00:33:06 +0000 (0:00:00.568) 0:05:04.111 **** 2025-09-13 00:33:09.904451 | orchestrator | changed: [testbed-manager] 2025-09-13 00:33:09.904462 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:33:09.904473 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:33:09.904483 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:33:09.904494 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:33:09.904505 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:33:09.904516 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:33:09.904526 | orchestrator | 2025-09-13 00:33:09.904537 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-13 00:33:09.904555 | orchestrator | Saturday 13 September 2025 00:33:07 +0000 (0:00:00.725) 0:05:04.837 **** 2025-09-13 00:33:09.904566 | orchestrator | ok: [testbed-manager] 2025-09-13 00:33:09.904577 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:33:09.904588 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:33:09.904598 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:33:09.904609 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:33:09.904620 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:33:09.904630 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:33:09.904641 | orchestrator | 2025-09-13 00:33:09.904652 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-13 00:33:09.904663 | orchestrator | Saturday 13 September 2025 00:33:08 +0000 (0:00:01.807) 0:05:06.645 **** 2025-09-13 00:33:09.904674 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:33:09.904684 | orchestrator | changed: [testbed-node-0] 
2025-09-13 00:33:09.904695 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:33:09.904706 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:33:09.904717 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:33:09.904727 | orchestrator | changed: [testbed-manager] 2025-09-13 00:33:09.904738 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:33:09.904749 | orchestrator | 2025-09-13 00:33:09.904760 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-09-13 00:33:09.904771 | orchestrator | Saturday 13 September 2025 00:33:09 +0000 (0:00:00.766) 0:05:07.412 **** 2025-09-13 00:33:09.904781 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:33:09.904792 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:33:09.904803 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:33:09.904814 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:33:09.904824 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:33:09.904835 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:33:09.904846 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:33:09.904856 | orchestrator | 2025-09-13 00:33:09.904867 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-09-13 00:33:09.904885 | orchestrator | Saturday 13 September 2025 00:33:09 +0000 (0:00:00.255) 0:05:07.667 **** 2025-09-13 00:33:35.758605 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:33:35.758721 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:33:35.758736 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:33:35.758748 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:33:35.758759 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:33:35.758771 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:33:35.758781 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:33:35.758793 | orchestrator | 2025-09-13 00:33:35.758806 | 
orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-09-13 00:33:35.758818 | orchestrator | Saturday 13 September 2025 00:33:10 +0000 (0:00:00.426) 0:05:08.094 **** 2025-09-13 00:33:35.758829 | orchestrator | ok: [testbed-manager] 2025-09-13 00:33:35.758841 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:33:35.758852 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:33:35.758862 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:33:35.758873 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:33:35.758884 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:33:35.758894 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:33:35.758905 | orchestrator | 2025-09-13 00:33:35.758916 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-09-13 00:33:35.758927 | orchestrator | Saturday 13 September 2025 00:33:10 +0000 (0:00:00.310) 0:05:08.404 **** 2025-09-13 00:33:35.758939 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:33:35.758950 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:33:35.758961 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:33:35.758971 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:33:35.758982 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:33:35.758993 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:33:35.759004 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:33:35.759040 | orchestrator | 2025-09-13 00:33:35.759052 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-09-13 00:33:35.759063 | orchestrator | Saturday 13 September 2025 00:33:10 +0000 (0:00:00.287) 0:05:08.692 **** 2025-09-13 00:33:35.759074 | orchestrator | ok: [testbed-manager] 2025-09-13 00:33:35.759085 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:33:35.759095 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:33:35.759107 | orchestrator | ok: 
[testbed-node-5] 2025-09-13 00:33:35.759119 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:33:35.759131 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:33:35.759143 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:33:35.759155 | orchestrator | 2025-09-13 00:33:35.759167 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-09-13 00:33:35.759179 | orchestrator | Saturday 13 September 2025 00:33:11 +0000 (0:00:00.325) 0:05:09.017 **** 2025-09-13 00:33:35.759213 | orchestrator | ok: [testbed-manager] =>  2025-09-13 00:33:35.759227 | orchestrator |  docker_version: 5:27.5.1 2025-09-13 00:33:35.759239 | orchestrator | ok: [testbed-node-3] =>  2025-09-13 00:33:35.759251 | orchestrator |  docker_version: 5:27.5.1 2025-09-13 00:33:35.759263 | orchestrator | ok: [testbed-node-4] =>  2025-09-13 00:33:35.759275 | orchestrator |  docker_version: 5:27.5.1 2025-09-13 00:33:35.759287 | orchestrator | ok: [testbed-node-5] =>  2025-09-13 00:33:35.759299 | orchestrator |  docker_version: 5:27.5.1 2025-09-13 00:33:35.759311 | orchestrator | ok: [testbed-node-0] =>  2025-09-13 00:33:35.759323 | orchestrator |  docker_version: 5:27.5.1 2025-09-13 00:33:35.759335 | orchestrator | ok: [testbed-node-1] =>  2025-09-13 00:33:35.759347 | orchestrator |  docker_version: 5:27.5.1 2025-09-13 00:33:35.759360 | orchestrator | ok: [testbed-node-2] =>  2025-09-13 00:33:35.759372 | orchestrator |  docker_version: 5:27.5.1 2025-09-13 00:33:35.759385 | orchestrator | 2025-09-13 00:33:35.759397 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-09-13 00:33:35.759410 | orchestrator | Saturday 13 September 2025 00:33:11 +0000 (0:00:00.328) 0:05:09.345 **** 2025-09-13 00:33:35.759422 | orchestrator | ok: [testbed-manager] =>  2025-09-13 00:33:35.759434 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-13 00:33:35.759446 | orchestrator | ok: [testbed-node-3] =>  2025-09-13 00:33:35.759458 | 
orchestrator |  docker_cli_version: 5:27.5.1 2025-09-13 00:33:35.759469 | orchestrator | ok: [testbed-node-4] =>  2025-09-13 00:33:35.759480 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-13 00:33:35.759490 | orchestrator | ok: [testbed-node-5] =>  2025-09-13 00:33:35.759501 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-13 00:33:35.759512 | orchestrator | ok: [testbed-node-0] =>  2025-09-13 00:33:35.759522 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-13 00:33:35.759533 | orchestrator | ok: [testbed-node-1] =>  2025-09-13 00:33:35.759544 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-13 00:33:35.759554 | orchestrator | ok: [testbed-node-2] =>  2025-09-13 00:33:35.759565 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-13 00:33:35.759576 | orchestrator | 2025-09-13 00:33:35.759586 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-09-13 00:33:35.759597 | orchestrator | Saturday 13 September 2025 00:33:11 +0000 (0:00:00.273) 0:05:09.619 **** 2025-09-13 00:33:35.759608 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:33:35.759619 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:33:35.759629 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:33:35.759640 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:33:35.759651 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:33:35.759661 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:33:35.759672 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:33:35.759682 | orchestrator | 2025-09-13 00:33:35.759693 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-09-13 00:33:35.759704 | orchestrator | Saturday 13 September 2025 00:33:12 +0000 (0:00:00.262) 0:05:09.882 **** 2025-09-13 00:33:35.759715 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:33:35.759733 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:33:35.759744 
| orchestrator | skipping: [testbed-node-4] 2025-09-13 00:33:35.759755 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:33:35.759765 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:33:35.759776 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:33:35.759787 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:33:35.759797 | orchestrator | 2025-09-13 00:33:35.759808 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-09-13 00:33:35.759819 | orchestrator | Saturday 13 September 2025 00:33:12 +0000 (0:00:00.294) 0:05:10.176 **** 2025-09-13 00:33:35.759848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:33:35.759863 | orchestrator | 2025-09-13 00:33:35.759874 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-09-13 00:33:35.759885 | orchestrator | Saturday 13 September 2025 00:33:12 +0000 (0:00:00.414) 0:05:10.591 **** 2025-09-13 00:33:35.759896 | orchestrator | ok: [testbed-manager] 2025-09-13 00:33:35.759907 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:33:35.759917 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:33:35.759928 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:33:35.759939 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:33:35.759949 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:33:35.759960 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:33:35.759971 | orchestrator | 2025-09-13 00:33:35.759982 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-09-13 00:33:35.759993 | orchestrator | Saturday 13 September 2025 00:33:13 +0000 (0:00:00.827) 0:05:11.419 **** 2025-09-13 00:33:35.760003 | orchestrator | ok: [testbed-manager] 
2025-09-13 00:33:35.760014 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:33:35.760043 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:33:35.760054 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:33:35.760065 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:33:35.760076 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:33:35.760086 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:33:35.760097 | orchestrator | 2025-09-13 00:33:35.760108 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-09-13 00:33:35.760120 | orchestrator | Saturday 13 September 2025 00:33:16 +0000 (0:00:03.279) 0:05:14.698 **** 2025-09-13 00:33:35.760131 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-09-13 00:33:35.760142 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-09-13 00:33:35.760152 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-09-13 00:33:35.760163 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-09-13 00:33:35.760174 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-09-13 00:33:35.760184 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-09-13 00:33:35.760213 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:33:35.760224 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-09-13 00:33:35.760235 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-09-13 00:33:35.760245 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-09-13 00:33:35.760256 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:33:35.760267 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-09-13 00:33:35.760282 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-09-13 00:33:35.760293 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:33:35.760304 | orchestrator | skipping: 
[testbed-node-5] => (item=docker-engine)  2025-09-13 00:33:35.760314 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-09-13 00:33:35.760325 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-09-13 00:33:35.760336 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-09-13 00:33:35.760355 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:33:35.760366 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-09-13 00:33:35.760377 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-09-13 00:33:35.760387 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-09-13 00:33:35.760398 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:33:35.760409 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:33:35.760419 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-09-13 00:33:35.760430 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-09-13 00:33:35.760440 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-09-13 00:33:35.760451 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:33:35.760462 | orchestrator | 2025-09-13 00:33:35.760472 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-09-13 00:33:35.760483 | orchestrator | Saturday 13 September 2025 00:33:17 +0000 (0:00:00.681) 0:05:15.379 **** 2025-09-13 00:33:35.760494 | orchestrator | ok: [testbed-manager] 2025-09-13 00:33:35.760505 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:33:35.760515 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:33:35.760526 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:33:35.760537 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:33:35.760547 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:33:35.760558 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:33:35.760568 | orchestrator | 2025-09-13 
00:33:35.760579 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-09-13 00:33:35.760590 | orchestrator | Saturday 13 September 2025 00:33:23 +0000 (0:00:06.105) 0:05:21.485 **** 2025-09-13 00:33:35.760601 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:33:35.760612 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:33:35.760622 | orchestrator | ok: [testbed-manager] 2025-09-13 00:33:35.760633 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:33:35.760643 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:33:35.760654 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:33:35.760664 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:33:35.760675 | orchestrator | 2025-09-13 00:33:35.760686 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-09-13 00:33:35.760696 | orchestrator | Saturday 13 September 2025 00:33:25 +0000 (0:00:01.406) 0:05:22.891 **** 2025-09-13 00:33:35.760707 | orchestrator | ok: [testbed-manager] 2025-09-13 00:33:35.760718 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:33:35.760728 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:33:35.760739 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:33:35.760750 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:33:35.760760 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:33:35.760771 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:33:35.760781 | orchestrator | 2025-09-13 00:33:35.760792 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-09-13 00:33:35.760803 | orchestrator | Saturday 13 September 2025 00:33:32 +0000 (0:00:07.444) 0:05:30.336 **** 2025-09-13 00:33:35.760814 | orchestrator | changed: [testbed-manager] 2025-09-13 00:33:35.760825 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:33:35.760835 | orchestrator | changed: [testbed-node-5] 2025-09-13 
00:33:35.760855 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:34:17.956775 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:34:17.956876 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:34:17.956889 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:34:17.956901 | orchestrator | 2025-09-13 00:34:17.956913 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-09-13 00:34:17.956926 | orchestrator | Saturday 13 September 2025 00:33:35 +0000 (0:00:03.184) 0:05:33.520 **** 2025-09-13 00:34:17.956938 | orchestrator | ok: [testbed-manager] 2025-09-13 00:34:17.956949 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:34:17.956960 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:34:17.956993 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:34:17.957004 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:34:17.957015 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:34:17.957025 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:34:17.957036 | orchestrator | 2025-09-13 00:34:17.957047 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-09-13 00:34:17.957058 | orchestrator | Saturday 13 September 2025 00:33:37 +0000 (0:00:01.261) 0:05:34.781 **** 2025-09-13 00:34:17.957068 | orchestrator | ok: [testbed-manager] 2025-09-13 00:34:17.957079 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:34:17.957089 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:34:17.957100 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:34:17.957110 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:34:17.957121 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:34:17.957132 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:34:17.957142 | orchestrator | 2025-09-13 00:34:17.957153 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-09-13 
00:34:17.957164 | orchestrator | Saturday 13 September 2025 00:33:38 +0000 (0:00:01.233) 0:05:36.015 **** 2025-09-13 00:34:17.957175 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:34:17.957185 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:34:17.957243 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:34:17.957254 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:34:17.957265 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:34:17.957276 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:34:17.957286 | orchestrator | changed: [testbed-manager] 2025-09-13 00:34:17.957297 | orchestrator | 2025-09-13 00:34:17.957310 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-09-13 00:34:17.957323 | orchestrator | Saturday 13 September 2025 00:33:38 +0000 (0:00:00.648) 0:05:36.663 **** 2025-09-13 00:34:17.957335 | orchestrator | ok: [testbed-manager] 2025-09-13 00:34:17.957348 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:34:17.957361 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:34:17.957387 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:34:17.957400 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:34:17.957412 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:34:17.957425 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:34:17.957437 | orchestrator | 2025-09-13 00:34:17.957449 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-09-13 00:34:17.957461 | orchestrator | Saturday 13 September 2025 00:33:48 +0000 (0:00:09.662) 0:05:46.325 **** 2025-09-13 00:34:17.957474 | orchestrator | changed: [testbed-manager] 2025-09-13 00:34:17.957486 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:34:17.957499 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:34:17.957511 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:34:17.957523 | orchestrator | changed: 
[testbed-node-0] 2025-09-13 00:34:17.957535 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:34:17.957547 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:34:17.957559 | orchestrator | 2025-09-13 00:34:17.957572 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-09-13 00:34:17.957585 | orchestrator | Saturday 13 September 2025 00:33:49 +0000 (0:00:00.901) 0:05:47.227 **** 2025-09-13 00:34:17.957597 | orchestrator | ok: [testbed-manager] 2025-09-13 00:34:17.957610 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:34:17.957622 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:34:17.957635 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:34:17.957647 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:34:17.957659 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:34:17.957669 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:34:17.957679 | orchestrator | 2025-09-13 00:34:17.957690 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-09-13 00:34:17.957701 | orchestrator | Saturday 13 September 2025 00:33:57 +0000 (0:00:08.535) 0:05:55.763 **** 2025-09-13 00:34:17.957721 | orchestrator | ok: [testbed-manager] 2025-09-13 00:34:17.957732 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:34:17.957742 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:34:17.957753 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:34:17.957764 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:34:17.957774 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:34:17.957785 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:34:17.957795 | orchestrator | 2025-09-13 00:34:17.957806 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-09-13 00:34:17.957817 | orchestrator | Saturday 13 September 2025 00:34:08 +0000 (0:00:10.899) 0:06:06.662 **** 2025-09-13 
00:34:17.957828 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-09-13 00:34:17.957839 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-09-13 00:34:17.957849 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-09-13 00:34:17.957860 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-09-13 00:34:17.957871 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-09-13 00:34:17.957881 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-09-13 00:34:17.957892 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-09-13 00:34:17.957902 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-09-13 00:34:17.957913 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-09-13 00:34:17.957923 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-09-13 00:34:17.957934 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-09-13 00:34:17.957945 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-09-13 00:34:17.957955 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-09-13 00:34:17.957966 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-09-13 00:34:17.957977 | orchestrator | 2025-09-13 00:34:17.957988 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-09-13 00:34:17.958014 | orchestrator | Saturday 13 September 2025 00:34:10 +0000 (0:00:01.147) 0:06:07.809 **** 2025-09-13 00:34:17.958086 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:34:17.958097 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:34:17.958108 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:34:17.958118 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:34:17.958129 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:34:17.958140 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:34:17.958150 | orchestrator 
| skipping: [testbed-node-2] 2025-09-13 00:34:17.958161 | orchestrator | 2025-09-13 00:34:17.958172 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-09-13 00:34:17.958183 | orchestrator | Saturday 13 September 2025 00:34:10 +0000 (0:00:00.479) 0:06:08.289 **** 2025-09-13 00:34:17.958214 | orchestrator | ok: [testbed-manager] 2025-09-13 00:34:17.958226 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:34:17.958236 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:34:17.958247 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:34:17.958257 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:34:17.958268 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:34:17.958279 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:34:17.958289 | orchestrator | 2025-09-13 00:34:17.958300 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-09-13 00:34:17.958313 | orchestrator | Saturday 13 September 2025 00:34:14 +0000 (0:00:03.653) 0:06:11.943 **** 2025-09-13 00:34:17.958324 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:34:17.958334 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:34:17.958345 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:34:17.958356 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:34:17.958366 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:34:17.958377 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:34:17.958387 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:34:17.958407 | orchestrator | 2025-09-13 00:34:17.958418 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-09-13 00:34:17.958430 | orchestrator | Saturday 13 September 2025 00:34:14 +0000 (0:00:00.381) 0:06:12.324 **** 2025-09-13 00:34:17.958441 | orchestrator | skipping: [testbed-manager] => 
(item=python3-docker)  2025-09-13 00:34:17.958452 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-09-13 00:34:17.958463 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:34:17.958474 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-09-13 00:34:17.958490 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-09-13 00:34:17.958501 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:34:17.958511 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-09-13 00:34:17.958522 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-09-13 00:34:17.958533 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:34:17.958543 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-09-13 00:34:17.958554 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-09-13 00:34:17.958565 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:34:17.958575 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-09-13 00:34:17.958586 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-09-13 00:34:17.958597 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:34:17.958607 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-09-13 00:34:17.958618 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-09-13 00:34:17.958628 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:34:17.958639 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-09-13 00:34:17.958649 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-09-13 00:34:17.958660 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:34:17.958671 | orchestrator | 2025-09-13 00:34:17.958681 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-09-13 00:34:17.958692 | 
orchestrator | Saturday 13 September 2025 00:34:15 +0000 (0:00:00.558) 0:06:12.883 **** 2025-09-13 00:34:17.958703 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:34:17.958714 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:34:17.958725 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:34:17.958735 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:34:17.958746 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:34:17.958757 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:34:17.958767 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:34:17.958778 | orchestrator | 2025-09-13 00:34:17.958788 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-09-13 00:34:17.958799 | orchestrator | Saturday 13 September 2025 00:34:15 +0000 (0:00:00.439) 0:06:13.323 **** 2025-09-13 00:34:17.958810 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:34:17.958821 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:34:17.958831 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:34:17.958842 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:34:17.958852 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:34:17.958863 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:34:17.958874 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:34:17.958884 | orchestrator | 2025-09-13 00:34:17.958895 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-09-13 00:34:17.958906 | orchestrator | Saturday 13 September 2025 00:34:15 +0000 (0:00:00.452) 0:06:13.775 **** 2025-09-13 00:34:17.958916 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:34:17.958927 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:34:17.958938 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:34:17.958948 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:34:17.958959 | orchestrator | 
skipping: [testbed-node-0] 2025-09-13 00:34:17.958976 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:34:17.958987 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:34:17.958998 | orchestrator | 2025-09-13 00:34:17.959009 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-09-13 00:34:17.959020 | orchestrator | Saturday 13 September 2025 00:34:16 +0000 (0:00:00.434) 0:06:14.210 **** 2025-09-13 00:34:17.959030 | orchestrator | ok: [testbed-manager] 2025-09-13 00:34:17.959049 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:34:38.288195 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:34:38.288396 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:34:38.288420 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:34:38.288437 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:34:38.288453 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:34:38.288469 | orchestrator | 2025-09-13 00:34:38.288487 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-09-13 00:34:38.288505 | orchestrator | Saturday 13 September 2025 00:34:17 +0000 (0:00:01.514) 0:06:15.725 **** 2025-09-13 00:34:38.288522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:34:38.288541 | orchestrator | 2025-09-13 00:34:38.288557 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-09-13 00:34:38.288573 | orchestrator | Saturday 13 September 2025 00:34:18 +0000 (0:00:00.872) 0:06:16.598 **** 2025-09-13 00:34:38.288589 | orchestrator | ok: [testbed-manager] 2025-09-13 00:34:38.288604 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:34:38.288620 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:34:38.288634 | orchestrator | 
changed: [testbed-node-5] 2025-09-13 00:34:38.288649 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:34:38.288665 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:34:38.288681 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:34:38.288695 | orchestrator | 2025-09-13 00:34:38.288712 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-09-13 00:34:38.288728 | orchestrator | Saturday 13 September 2025 00:34:19 +0000 (0:00:00.746) 0:06:17.345 **** 2025-09-13 00:34:38.288744 | orchestrator | ok: [testbed-manager] 2025-09-13 00:34:38.288762 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:34:38.288778 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:34:38.288793 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:34:38.288809 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:34:38.288825 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:34:38.288840 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:34:38.288856 | orchestrator | 2025-09-13 00:34:38.288873 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-09-13 00:34:38.288889 | orchestrator | Saturday 13 September 2025 00:34:20 +0000 (0:00:00.734) 0:06:18.079 **** 2025-09-13 00:34:38.288904 | orchestrator | ok: [testbed-manager] 2025-09-13 00:34:38.288919 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:34:38.288955 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:34:38.288974 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:34:38.288991 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:34:38.289008 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:34:38.289026 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:34:38.289042 | orchestrator | 2025-09-13 00:34:38.289058 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-09-13 00:34:38.289075 | 
orchestrator | Saturday 13 September 2025 00:34:21 +0000 (0:00:01.196) 0:06:19.275 **** 2025-09-13 00:34:38.289092 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:34:38.289108 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:34:38.289123 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:34:38.289139 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:34:38.289155 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:34:38.289172 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:34:38.289248 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:34:38.289268 | orchestrator | 2025-09-13 00:34:38.289284 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-09-13 00:34:38.289301 | orchestrator | Saturday 13 September 2025 00:34:22 +0000 (0:00:01.367) 0:06:20.642 **** 2025-09-13 00:34:38.289318 | orchestrator | ok: [testbed-manager] 2025-09-13 00:34:38.289335 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:34:38.289351 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:34:38.289368 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:34:38.289384 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:34:38.289401 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:34:38.289417 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:34:38.289431 | orchestrator | 2025-09-13 00:34:38.289446 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-09-13 00:34:38.289462 | orchestrator | Saturday 13 September 2025 00:34:24 +0000 (0:00:01.173) 0:06:21.816 **** 2025-09-13 00:34:38.289478 | orchestrator | changed: [testbed-manager] 2025-09-13 00:34:38.289492 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:34:38.289508 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:34:38.289525 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:34:38.289543 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:34:38.289559 | 
orchestrator | changed: [testbed-node-2] 2025-09-13 00:34:38.289575 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:34:38.289593 | orchestrator | 2025-09-13 00:34:38.289611 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-09-13 00:34:38.289627 | orchestrator | Saturday 13 September 2025 00:34:25 +0000 (0:00:01.234) 0:06:23.051 **** 2025-09-13 00:34:38.289645 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:34:38.289663 | orchestrator | 2025-09-13 00:34:38.289679 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-09-13 00:34:38.289695 | orchestrator | Saturday 13 September 2025 00:34:26 +0000 (0:00:00.859) 0:06:23.910 **** 2025-09-13 00:34:38.289713 | orchestrator | ok: [testbed-manager] 2025-09-13 00:34:38.289729 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:34:38.289744 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:34:38.289759 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:34:38.289774 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:34:38.289788 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:34:38.289803 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:34:38.289818 | orchestrator | 2025-09-13 00:34:38.289833 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-09-13 00:34:38.289848 | orchestrator | Saturday 13 September 2025 00:34:27 +0000 (0:00:01.214) 0:06:25.125 **** 2025-09-13 00:34:38.289864 | orchestrator | ok: [testbed-manager] 2025-09-13 00:34:38.289879 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:34:38.289920 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:34:38.289936 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:34:38.289951 | orchestrator | 
ok: [testbed-node-0] 2025-09-13 00:34:38.289966 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:34:38.289982 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:34:38.289998 | orchestrator | 2025-09-13 00:34:38.290014 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-09-13 00:34:38.290105 | orchestrator | Saturday 13 September 2025 00:34:28 +0000 (0:00:00.956) 0:06:26.082 **** 2025-09-13 00:34:38.290123 | orchestrator | ok: [testbed-manager] 2025-09-13 00:34:38.290138 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:34:38.290154 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:34:38.290170 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:34:38.290186 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:34:38.290203 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:34:38.290245 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:34:38.290263 | orchestrator | 2025-09-13 00:34:38.290281 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-09-13 00:34:38.290318 | orchestrator | Saturday 13 September 2025 00:34:29 +0000 (0:00:00.994) 0:06:27.076 **** 2025-09-13 00:34:38.290337 | orchestrator | ok: [testbed-manager] 2025-09-13 00:34:38.290352 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:34:38.290369 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:34:38.290385 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:34:38.290401 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:34:38.290418 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:34:38.290434 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:34:38.290450 | orchestrator | 2025-09-13 00:34:38.290467 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-09-13 00:34:38.290483 | orchestrator | Saturday 13 September 2025 00:34:30 +0000 (0:00:01.008) 0:06:28.085 **** 2025-09-13 00:34:38.290499 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.docker : Flush handlers] **********************************
Saturday 13 September 2025  00:34:31 +0000 (0:00:01.028)       0:06:29.114 ****

TASK [osism.services.docker : Flush handlers] **********************************
Saturday 13 September 2025  00:34:31 +0000 (0:00:00.038)       0:06:29.152 ****

TASK [osism.services.docker : Flush handlers] **********************************
Saturday 13 September 2025  00:34:31 +0000 (0:00:00.038)       0:06:29.191 ****

TASK [osism.services.docker : Flush handlers] **********************************
Saturday 13 September 2025  00:34:31 +0000 (0:00:00.060)       0:06:29.251 ****

TASK [osism.services.docker : Flush handlers] **********************************
Saturday 13 September 2025  00:34:31 +0000 (0:00:00.040)       0:06:29.292 ****

TASK [osism.services.docker : Flush handlers] **********************************
Saturday 13 September 2025  00:34:31 +0000 (0:00:00.040)       0:06:29.332 ****
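The repeated "Flush handlers" entries above are no-op tasks with no per-host results — the typical signature of a `meta: flush_handlers` step. It forces any handlers notified so far (here the rsyslog and docker restarts that follow) to run immediately instead of waiting for the end of the play. A minimal sketch of that pattern (assumed, not the actual osism bootstrap code):

```yaml
# Run all pending notified handlers at this point in the play
# rather than at the end.
- name: Flush handlers
  ansible.builtin.meta: flush_handlers
```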
TASK [osism.services.docker : Flush handlers] **********************************
Saturday 13 September 2025  00:34:31 +0000 (0:00:00.047)       0:06:29.380 ****

RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
Saturday 13 September 2025  00:34:31 +0000 (0:00:00.040)       0:06:29.421 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
Saturday 13 September 2025  00:34:32 +0000 (0:00:01.052)       0:06:30.474 ****
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
Saturday 13 September 2025  00:34:33 +0000 (0:00:01.242)       0:06:31.716 ****
skipping: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
Saturday 13 September 2025  00:34:36 +0000 (0:00:02.488)       0:06:34.205 ****
skipping: [testbed-node-3]

TASK [osism.services.docker : Add user to docker group] ************************
Saturday 13 September 2025  00:34:36 +0000 (0:00:00.100)       0:06:34.305 ****
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
ok: [testbed-manager]

TASK [osism.services.docker : Log into private registry and force re-authorization] ***
Saturday 13 September 2025  00:34:38 +0000 (0:00:01.743)       0:06:36.049 ****
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
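The "Add user to docker group" task reports `changed` on the fresh nodes and `ok` on the manager (already a member), which matches the usual idempotent group-membership pattern. An illustrative sketch — the user name and role internals are assumptions, not the actual osism.services.docker code:

```yaml
# Illustrative: append the connecting user to the docker group
# without removing its existing group memberships.
- name: Add user to docker group
  ansible.builtin.user:
    name: "{{ ansible_user }}"
    groups: docker
    append: true
```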
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Include facts tasks] *****************************
Saturday 13 September 2025  00:34:38 +0000 (0:00:00.501)       0:06:36.551 ****
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.docker : Create facts directory] **************************
Saturday 13 September 2025  00:34:39 +0000 (0:00:01.101)       0:06:37.652 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Copy docker fact files] **************************
Saturday 13 September 2025  00:34:40 +0000 (0:00:00.825)       0:06:38.478 ****
ok: [testbed-manager] => (item=docker_containers)
changed: [testbed-node-3] => (item=docker_containers)
changed: [testbed-node-4] => (item=docker_containers)
changed: [testbed-node-5] => (item=docker_containers)
changed: [testbed-node-0] => (item=docker_containers)
changed: [testbed-node-1] => (item=docker_containers)
changed: [testbed-node-2] => (item=docker_containers)
ok: [testbed-manager] => (item=docker_images)
changed: [testbed-node-4] => (item=docker_images)
changed: [testbed-node-3] => (item=docker_images)
changed: [testbed-node-5] => (item=docker_images)
changed: [testbed-node-0] => (item=docker_images)
changed: [testbed-node-1] => (item=docker_images)
changed: [testbed-node-2] => (item=docker_images)

TASK [osism.commons.docker_compose : This install type is not supported] *******
Saturday 13 September 2025  00:34:43 +0000 (0:00:02.411)       0:06:40.889 ****
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
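The facts tasks above follow Ansible's local ("custom") facts convention: executable `.fact` files installed under `/etc/ansible/facts.d` are run during fact gathering and exposed as `ansible_local.<name>`. A minimal sketch of that pattern, with hypothetical file names based on the loop items shown in the log (not the actual osism role code):

```yaml
# Illustrative: install executable custom-fact scripts; their output
# becomes available as ansible_local.docker_containers etc. on later runs.
- name: Create facts directory
  ansible.builtin.file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

- name: Copy docker fact files
  ansible.builtin.copy:
    src: "{{ item }}.fact"
    dest: "/etc/ansible/facts.d/{{ item }}.fact"
    mode: "0755"
  loop:
    - docker_containers
    - docker_images
```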
TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
Saturday 13 September 2025  00:34:43 +0000 (0:00:00.483)       0:06:41.372 ****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
Saturday 13 September 2025  00:34:44 +0000 (0:00:01.037)       0:06:42.410 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
Saturday 13 September 2025  00:34:45 +0000 (0:00:00.793)       0:06:43.204 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
Saturday 13 September 2025  00:34:46 +0000 (0:00:00.814)       0:06:44.018 ****
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
Saturday 13 September 2025  00:34:46 +0000 (0:00:00.423)       0:06:44.442 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
Saturday 13 September 2025  00:34:48 +0000 (0:00:01.388)       0:06:45.830 ****
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
Saturday 13 September 2025  00:34:48 +0000 (0:00:00.454)       0:06:46.284 ****
ok: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
Saturday 13 September 2025  00:34:55 +0000 (0:00:07.302)       0:06:53.586 ****
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.docker_compose : Enable osism.target] **********************
Saturday 13 September 2025  00:34:57 +0000 (0:00:01.273)       0:06:54.859 ****
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
Saturday 13 September 2025  00:34:58 +0000 (0:00:01.641)       0:06:56.501 ****
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.facts : Create custom facts directory] *********************
Saturday 13 September 2025  00:35:00 +0000 (0:00:01.796)       0:06:58.297 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
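The osism.target tasks above ship a custom systemd target and enable it, a common way to group compose-managed units so they can be started and stopped together. A minimal sketch of that copy-then-enable pattern (file paths assumed; not the actual osism.commons.docker_compose code):

```yaml
# Illustrative: install a custom systemd target and enable it so
# dependent units can hook into it via WantedBy=osism.target.
- name: Copy osism.target systemd file
  ansible.builtin.copy:
    src: osism.target
    dest: /etc/systemd/system/osism.target
    mode: "0644"
  notify: Reload systemd daemon

- name: Enable osism.target
  ansible.builtin.systemd:
    name: osism.target
    enabled: true
```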
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.facts : Copy fact files] ***********************************
Saturday 13 September 2025  00:35:01 +0000 (0:00:00.834)       0:06:59.131 ****
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
Saturday 13 September 2025  00:35:02 +0000 (0:00:01.024)       0:07:00.156 ****
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.chrony : Gather variables for each operating system] ******
Saturday 13 September 2025  00:35:02 +0000 (0:00:00.519)       0:07:00.676 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
Saturday 13 September 2025  00:35:03 +0000 (0:00:00.491)       0:07:01.168 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
Saturday 13 September 2025  00:35:03 +0000 (0:00:00.525)       0:07:01.693 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.chrony : Populate service facts] **************************
Saturday 13 September 2025  00:35:04 +0000 (0:00:00.515)       0:07:02.209 ****
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-0]

TASK [osism.services.chrony : Manage timesyncd service] ************************
Saturday 13 September 2025  00:35:10 +0000 (0:00:05.664)       0:07:07.873 ****
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.chrony : Include distribution specific install tasks] *****
Saturday 13 September 2025  00:35:10 +0000 (0:00:00.537)       0:07:08.410 ****
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.chrony : Install package] *********************************
Saturday 13 September 2025  00:35:11 +0000 (0:00:00.808)       0:07:09.219 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [osism.services.chrony : Manage chrony service] ***************************
Saturday 13 September 2025  00:35:13 +0000 (0:00:01.964)       0:07:11.184 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.chrony : Check if configuration file exists] **************
Saturday 13 September 2025  00:35:14 +0000 (0:00:01.101)       0:07:12.285 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.chrony : Copy configuration file] *************************
Saturday 13 September 2025  00:35:15 +0000 (0:00:00.801)       0:07:13.087 ****
changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
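The chrony "Copy configuration file" task renders `chrony.conf.j2` on every host and reports `changed`, which later triggers the "Restart chrony service" handler. This is the standard template-plus-notify pattern; a minimal sketch with an assumed destination path (not the actual osism.services.chrony code):

```yaml
# Illustrative: render chrony.conf from a template and restart the
# service only when the rendered file actually changed.
- name: Copy configuration file
  ansible.builtin.template:
    src: chrony.conf.j2
    dest: /etc/chrony/chrony.conf
    mode: "0644"
  notify: Restart chrony service
```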
TASK [osism.services.lldpd : Include distribution specific install tasks] ******
Saturday 13 September 2025  00:35:16 +0000 (0:00:01.642)       0:07:14.729 ****
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.lldpd : Install lldpd package] ****************************
Saturday 13 September 2025  00:35:17 +0000 (0:00:00.989)       0:07:15.718 ****
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-manager]

TASK [osism.services.lldpd : Manage lldpd service] *****************************
Saturday 13 September 2025  00:35:26 +0000 (0:00:08.798)       0:07:24.517 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
Saturday 13 September 2025  00:35:28 +0000 (0:00:01.839)       0:07:26.356 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
Saturday 13 September 2025  00:35:29 +0000 (0:00:01.263)       0:07:27.620 ****
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

PLAY [Apply bootstrap role part 2] *********************************************

TASK [Include hardening role] **************************************************
Saturday 13 September 2025  00:35:31 +0000 (0:00:01.156)       0:07:28.776 ****
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY [Apply bootstrap roles part 3] ********************************************

TASK [osism.services.journald : Copy configuration file] ***********************
Saturday 13 September 2025  00:35:31 +0000 (0:00:00.494)       0:07:29.270 ****
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [osism.services.journald : Manage journald service] ***********************
Saturday 13 September 2025  00:35:32 +0000 (0:00:01.272)       0:07:30.542 ****
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Include auditd role] *****************************************************
Saturday 13 September 2025  00:35:34 +0000 (0:00:01.634)       0:07:32.177 ****
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Include smartd role] *****************************************************
Saturday 13 September 2025  00:35:34 +0000 (0:00:00.489)       0:07:32.667 ****
included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.smartd : Include distribution specific install tasks] *****
Saturday 13 September 2025  00:35:35 +0000 (0:00:00.951)       0:07:33.618 ****
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-09-13 00:35:56.515206 | orchestrator | Saturday 13 September 2025 00:35:36 +0000 (0:00:00.789) 0:07:34.408 **** 2025-09-13 00:35:56.515217 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:35:56.515228 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:35:56.515238 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:35:56.515249 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:35:56.515260 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:35:56.515270 | orchestrator | changed: [testbed-manager] 2025-09-13 00:35:56.515281 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:35:56.515291 | orchestrator | 2025-09-13 00:35:56.515302 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-09-13 00:35:56.515314 | orchestrator | Saturday 13 September 2025 00:35:44 +0000 (0:00:07.991) 0:07:42.400 **** 2025-09-13 00:35:56.515324 | orchestrator | changed: [testbed-manager] 2025-09-13 00:35:56.515335 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:35:56.515346 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:35:56.515356 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:35:56.515367 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:35:56.515406 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:35:56.515417 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:35:56.515428 | orchestrator | 2025-09-13 00:35:56.515442 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-09-13 00:35:56.515454 | orchestrator | Saturday 13 September 2025 00:35:45 +0000 (0:00:00.805) 0:07:43.205 **** 2025-09-13 00:35:56.515466 | orchestrator | changed: [testbed-manager] 2025-09-13 00:35:56.515479 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:35:56.515491 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:35:56.515503 | 
orchestrator | changed: [testbed-node-5] 2025-09-13 00:35:56.515515 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:35:56.515526 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:35:56.515538 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:35:56.515551 | orchestrator | 2025-09-13 00:35:56.515563 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-09-13 00:35:56.515575 | orchestrator | Saturday 13 September 2025 00:35:46 +0000 (0:00:01.505) 0:07:44.711 **** 2025-09-13 00:35:56.515587 | orchestrator | changed: [testbed-manager] 2025-09-13 00:35:56.515599 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:35:56.515612 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:35:56.515624 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:35:56.515636 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:35:56.515648 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:35:56.515660 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:35:56.515672 | orchestrator | 2025-09-13 00:35:56.515684 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-09-13 00:35:56.515696 | orchestrator | Saturday 13 September 2025 00:35:48 +0000 (0:00:01.707) 0:07:46.419 **** 2025-09-13 00:35:56.515708 | orchestrator | changed: [testbed-manager] 2025-09-13 00:35:56.515729 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:35:56.515740 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:35:56.515752 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:35:56.515764 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:35:56.515776 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:35:56.515788 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:35:56.515799 | orchestrator | 2025-09-13 00:35:56.515810 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-09-13 
00:35:56.515820 | orchestrator | Saturday 13 September 2025 00:35:49 +0000 (0:00:01.206) 0:07:47.625 **** 2025-09-13 00:35:56.515831 | orchestrator | changed: [testbed-manager] 2025-09-13 00:35:56.515842 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:35:56.515852 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:35:56.515863 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:35:56.515874 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:35:56.515885 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:35:56.515895 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:35:56.515906 | orchestrator | 2025-09-13 00:35:56.515917 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-09-13 00:35:56.515927 | orchestrator | 2025-09-13 00:35:56.515938 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-09-13 00:35:56.515994 | orchestrator | Saturday 13 September 2025 00:35:51 +0000 (0:00:01.264) 0:07:48.890 **** 2025-09-13 00:35:56.516008 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:35:56.516019 | orchestrator | 2025-09-13 00:35:56.516030 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-13 00:35:56.516057 | orchestrator | Saturday 13 September 2025 00:35:51 +0000 (0:00:00.819) 0:07:49.709 **** 2025-09-13 00:35:56.516069 | orchestrator | ok: [testbed-manager] 2025-09-13 00:35:56.516081 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:35:56.516092 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:35:56.516103 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:35:56.516113 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:35:56.516124 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:35:56.516135 | orchestrator | ok: [testbed-node-2] 2025-09-13 
00:35:56.516146 | orchestrator | 2025-09-13 00:35:56.516156 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-13 00:35:56.516167 | orchestrator | Saturday 13 September 2025 00:35:52 +0000 (0:00:00.793) 0:07:50.502 **** 2025-09-13 00:35:56.516178 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:35:56.516189 | orchestrator | changed: [testbed-manager] 2025-09-13 00:35:56.516200 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:35:56.516211 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:35:56.516221 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:35:56.516232 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:35:56.516243 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:35:56.516253 | orchestrator | 2025-09-13 00:35:56.516264 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-09-13 00:35:56.516275 | orchestrator | Saturday 13 September 2025 00:35:54 +0000 (0:00:01.287) 0:07:51.790 **** 2025-09-13 00:35:56.516286 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:35:56.516297 | orchestrator | 2025-09-13 00:35:56.516308 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-13 00:35:56.516319 | orchestrator | Saturday 13 September 2025 00:35:54 +0000 (0:00:00.710) 0:07:52.500 **** 2025-09-13 00:35:56.516329 | orchestrator | ok: [testbed-manager] 2025-09-13 00:35:56.516340 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:35:56.516351 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:35:56.516361 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:35:56.516389 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:35:56.516409 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:35:56.516419 | orchestrator | ok: [testbed-node-2] 2025-09-13 
00:35:56.516430 | orchestrator | 2025-09-13 00:35:56.516441 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-13 00:35:56.516452 | orchestrator | Saturday 13 September 2025 00:35:55 +0000 (0:00:00.708) 0:07:53.209 **** 2025-09-13 00:35:56.516468 | orchestrator | changed: [testbed-manager] 2025-09-13 00:35:56.516479 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:35:56.516490 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:35:56.516501 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:35:56.516511 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:35:56.516522 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:35:56.516532 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:35:56.516543 | orchestrator | 2025-09-13 00:35:56.516554 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:35:56.516566 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-09-13 00:35:56.516577 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-13 00:35:56.516589 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-13 00:35:56.516600 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-13 00:35:56.516611 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-09-13 00:35:56.516621 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-13 00:35:56.516632 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-13 00:35:56.516643 | orchestrator | 2025-09-13 00:35:56.516654 | orchestrator | 2025-09-13 
00:35:56.516665 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 00:35:56.516675 | orchestrator | Saturday 13 September 2025 00:35:56 +0000 (0:00:01.057) 0:07:54.267 **** 2025-09-13 00:35:56.516687 | orchestrator | =============================================================================== 2025-09-13 00:35:56.516697 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.68s 2025-09-13 00:35:56.516708 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.37s 2025-09-13 00:35:56.516719 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.32s 2025-09-13 00:35:56.516729 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.64s 2025-09-13 00:35:56.516740 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.09s 2025-09-13 00:35:56.516751 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.90s 2025-09-13 00:35:56.516761 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.86s 2025-09-13 00:35:56.516773 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.66s 2025-09-13 00:35:56.516783 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.80s 2025-09-13 00:35:56.516794 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.54s 2025-09-13 00:35:56.516811 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 7.99s 2025-09-13 00:35:56.779297 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.71s 2025-09-13 00:35:56.779441 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.64s 2025-09-13 00:35:56.779482 | 
orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.53s 2025-09-13 00:35:56.779493 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.44s 2025-09-13 00:35:56.779504 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.30s 2025-09-13 00:35:56.779515 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.11s 2025-09-13 00:35:56.779526 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.87s 2025-09-13 00:35:56.779537 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.79s 2025-09-13 00:35:56.779548 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.75s 2025-09-13 00:35:56.955426 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-13 00:35:56.955498 | orchestrator | + osism apply network 2025-09-13 00:36:09.359208 | orchestrator | 2025-09-13 00:36:09 | INFO  | Task d04b4374-b022-4456-9677-0f6c15e2b1bf (network) was prepared for execution. 2025-09-13 00:36:09.359315 | orchestrator | 2025-09-13 00:36:09 | INFO  | It takes a moment until task d04b4374-b022-4456-9677-0f6c15e2b1bf (network) has been started and output is visible here. 
2025-09-13 00:36:37.806808 | orchestrator |
2025-09-13 00:36:37.806918 | orchestrator | PLAY [Apply role network] ******************************************************
2025-09-13 00:36:37.806935 | orchestrator |
2025-09-13 00:36:37.806948 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-09-13 00:36:37.806959 | orchestrator | Saturday 13 September 2025 00:36:13 +0000 (0:00:00.300) 0:00:00.300 ****
2025-09-13 00:36:37.806971 | orchestrator | ok: [testbed-manager]
2025-09-13 00:36:37.806983 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:36:37.806994 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:36:37.807006 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:36:37.807017 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:36:37.807028 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:36:37.807039 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:36:37.807050 | orchestrator |
2025-09-13 00:36:37.807061 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-09-13 00:36:37.807072 | orchestrator | Saturday 13 September 2025 00:36:14 +0000 (0:00:00.707) 0:00:01.008 ****
2025-09-13 00:36:37.807085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:36:37.807099 | orchestrator |
2025-09-13 00:36:37.807110 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-09-13 00:36:37.807121 | orchestrator | Saturday 13 September 2025 00:36:15 +0000 (0:00:01.240) 0:00:02.248 ****
2025-09-13 00:36:37.807132 | orchestrator | ok: [testbed-manager]
2025-09-13 00:36:37.807142 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:36:37.807153 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:36:37.807164 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:36:37.807174 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:36:37.807185 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:36:37.807196 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:36:37.807207 | orchestrator |
2025-09-13 00:36:37.807218 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-09-13 00:36:37.807229 | orchestrator | Saturday 13 September 2025 00:36:17 +0000 (0:00:01.959) 0:00:04.208 ****
2025-09-13 00:36:37.807239 | orchestrator | ok: [testbed-manager]
2025-09-13 00:36:37.807250 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:36:37.807261 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:36:37.807272 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:36:37.807282 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:36:37.807293 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:36:37.807304 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:36:37.807315 | orchestrator |
2025-09-13 00:36:37.807326 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-09-13 00:36:37.807363 | orchestrator | Saturday 13 September 2025 00:36:19 +0000 (0:00:01.833) 0:00:06.042 ****
2025-09-13 00:36:37.807377 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-09-13 00:36:37.807389 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-09-13 00:36:37.807401 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-09-13 00:36:37.807413 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-09-13 00:36:37.807451 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-09-13 00:36:37.807464 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-09-13 00:36:37.807476 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-09-13 00:36:37.807488 | orchestrator |
2025-09-13 00:36:37.807501 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-09-13 00:36:37.807513 | orchestrator | Saturday 13 September 2025 00:36:20 +0000 (0:00:00.945) 0:00:06.988 ****
2025-09-13 00:36:37.807525 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-13 00:36:37.807538 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-13 00:36:37.807550 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-13 00:36:37.807562 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-13 00:36:37.807575 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-13 00:36:37.807587 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-13 00:36:37.807599 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-13 00:36:37.807611 | orchestrator |
2025-09-13 00:36:37.807623 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-09-13 00:36:37.807635 | orchestrator | Saturday 13 September 2025 00:36:23 +0000 (0:00:03.521) 0:00:10.510 ****
2025-09-13 00:36:37.807647 | orchestrator | changed: [testbed-manager]
2025-09-13 00:36:37.807660 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:36:37.807672 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:36:37.807684 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:36:37.807695 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:36:37.807706 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:36:37.807717 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:36:37.807727 | orchestrator |
2025-09-13 00:36:37.807738 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-09-13 00:36:37.807749 | orchestrator | Saturday 13 September 2025 00:36:25 +0000 (0:00:01.430) 0:00:11.940 ****
2025-09-13 00:36:37.807760 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-13 00:36:37.807770 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-13 00:36:37.807781 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-13 00:36:37.807791 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-13 00:36:37.807802 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-13 00:36:37.807812 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-13 00:36:37.807823 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-13 00:36:37.807834 | orchestrator |
2025-09-13 00:36:37.807844 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-09-13 00:36:37.807855 | orchestrator | Saturday 13 September 2025 00:36:27 +0000 (0:00:01.788) 0:00:13.729 ****
2025-09-13 00:36:37.807866 | orchestrator | ok: [testbed-manager]
2025-09-13 00:36:37.807877 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:36:37.807888 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:36:37.807898 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:36:37.807909 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:36:37.807920 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:36:37.807930 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:36:37.807941 | orchestrator |
2025-09-13 00:36:37.807952 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-09-13 00:36:37.807979 | orchestrator | Saturday 13 September 2025 00:36:28 +0000 (0:00:01.066) 0:00:14.795 ****
2025-09-13 00:36:37.807991 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:36:37.808002 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:36:37.808013 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:36:37.808033 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:36:37.808044 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:36:37.808055 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:36:37.808065 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:36:37.808076 | orchestrator |
2025-09-13 00:36:37.808087 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-09-13 00:36:37.808112 | orchestrator | Saturday 13 September 2025 00:36:28 +0000 (0:00:00.658) 0:00:15.454 ****
2025-09-13 00:36:37.808124 | orchestrator | ok: [testbed-manager]
2025-09-13 00:36:37.808134 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:36:37.808145 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:36:37.808156 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:36:37.808166 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:36:37.808177 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:36:37.808187 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:36:37.808198 | orchestrator |
2025-09-13 00:36:37.808209 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-09-13 00:36:37.808220 | orchestrator | Saturday 13 September 2025 00:36:30 +0000 (0:00:02.136) 0:00:17.590 ****
2025-09-13 00:36:37.808231 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:36:37.808241 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:36:37.808252 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:36:37.808263 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:36:37.808274 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:36:37.808284 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:36:37.808296 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-09-13 00:36:37.808308 | orchestrator |
2025-09-13 00:36:37.808319 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-09-13 00:36:37.808330 | orchestrator | Saturday 13 September 2025 00:36:31 +0000 (0:00:00.890) 0:00:18.481 ****
2025-09-13 00:36:37.808341 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:36:37.808351 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:36:37.808362 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:36:37.808373 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:36:37.808384 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:36:37.808394 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:36:37.808405 | orchestrator | ok: [testbed-manager]
2025-09-13 00:36:37.808415 | orchestrator |
2025-09-13 00:36:37.808445 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-09-13 00:36:37.808457 | orchestrator | Saturday 13 September 2025 00:36:33 +0000 (0:00:02.041) 0:00:20.522 ****
2025-09-13 00:36:37.808468 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:36:37.808481 | orchestrator |
2025-09-13 00:36:37.808492 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-09-13 00:36:37.808502 | orchestrator | Saturday 13 September 2025 00:36:35 +0000 (0:00:01.230) 0:00:21.753 ****
2025-09-13 00:36:37.808513 | orchestrator | ok: [testbed-manager]
2025-09-13 00:36:37.808524 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:36:37.808535 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:36:37.808545 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:36:37.808556 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:36:37.808566 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:36:37.808577 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:36:37.808588 | orchestrator |
2025-09-13 00:36:37.808598 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-09-13 00:36:37.808609 | orchestrator | Saturday 13 September 2025 00:36:36 +0000 (0:00:00.887) 0:00:22.641 ****
2025-09-13 00:36:37.808620 | orchestrator | ok: [testbed-manager]
2025-09-13 00:36:37.808631 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:36:37.808642 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:36:37.808660 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:36:37.808671 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:36:37.808681 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:36:37.808692 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:36:37.808702 | orchestrator |
2025-09-13 00:36:37.808713 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-09-13 00:36:37.808724 | orchestrator | Saturday 13 September 2025 00:36:36 +0000 (0:00:00.723) 0:00:23.364 ****
2025-09-13 00:36:37.808735 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-09-13 00:36:37.808745 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-09-13 00:36:37.808756 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-09-13 00:36:37.808767 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-09-13 00:36:37.808777 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-13 00:36:37.808788 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-09-13 00:36:37.808798 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-13 00:36:37.808809 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-09-13 00:36:37.808820 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-09-13 00:36:37.808830 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-13 00:36:37.808841 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-13 00:36:37.808852 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-13 00:36:37.808862 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-13 00:36:37.808873 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-13 00:36:37.808884 | orchestrator |
2025-09-13 00:36:37.808902 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-09-13 00:36:53.044746 | orchestrator | Saturday 13 September 2025 00:36:37 +0000 (0:00:01.036) 0:00:24.401 ****
2025-09-13 00:36:53.044868 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:36:53.044885 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:36:53.044897 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:36:53.044909 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:36:53.044919 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:36:53.044930 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:36:53.044943 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:36:53.044954 | orchestrator |
2025-09-13 00:36:53.044982 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-09-13 00:36:53.044994 | orchestrator | Saturday 13 September 2025 00:36:38 +0000 (0:00:00.556) 0:00:24.958 ****
2025-09-13 00:36:53.045007 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-2, testbed-node-1, testbed-manager, testbed-node-4, testbed-node-3, testbed-node-5
2025-09-13 00:36:53.045021 | orchestrator |
2025-09-13 00:36:53.045032 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-09-13 00:36:53.045043 | orchestrator | Saturday 13 September 2025 00:36:42 +0000 (0:00:04.032) 0:00:28.990 ****
2025-09-13 00:36:53.045056 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-13 00:36:53.045070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-13 00:36:53.045082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-13 00:36:53.045118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-13 00:36:53.045130 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-13 00:36:53.045142 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-13 00:36:53.045153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-09-13 00:36:53.045164 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-13 00:36:53.045175 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-09-13 00:36:53.045186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-09-13 00:36:53.045204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-09-13 00:36:53.045232 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-09-13 00:36:53.045245 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-09-13 00:36:53.045261 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14',
'mtu': 1350, 'vni': 23}}) 2025-09-13 00:36:53.045273 | orchestrator | 2025-09-13 00:36:53.045285 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-09-13 00:36:53.045298 | orchestrator | Saturday 13 September 2025 00:36:47 +0000 (0:00:05.333) 0:00:34.323 **** 2025-09-13 00:36:53.045311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-13 00:36:53.045333 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-13 00:36:53.045347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-13 00:36:53.045360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-13 00:36:53.045373 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-13 00:36:53.045386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-13 00:36:53.045399 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-13 00:36:53.045412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-13 00:36:53.045426 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-13 00:36:53.045463 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-13 00:36:53.045477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-13 00:36:53.045490 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-13 00:36:53.045511 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-13 00:36:58.494375 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-13 00:36:58.494530 | orchestrator | 2025-09-13 00:36:58.494547 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-13 00:36:58.494561 | orchestrator | Saturday 13 September 2025 00:36:53 +0000 (0:00:05.313) 0:00:39.636 **** 2025-09-13 00:36:58.494598 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-13 00:36:58.494611 | orchestrator | 2025-09-13 00:36:58.494622 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-13 00:36:58.494633 | orchestrator | Saturday 13 September 2025 00:36:54 +0000 (0:00:01.172) 0:00:40.809 **** 2025-09-13 00:36:58.494644 | orchestrator | ok: [testbed-manager] 2025-09-13 00:36:58.494656 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:36:58.494667 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:36:58.494677 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:36:58.494688 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:36:58.494699 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:36:58.494709 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:36:58.494720 | orchestrator | 2025-09-13 00:36:58.494731 | orchestrator | TASK [osism.commons.network : Remove unused configuration 
files] *************** 2025-09-13 00:36:58.494743 | orchestrator | Saturday 13 September 2025 00:36:55 +0000 (0:00:01.115) 0:00:41.925 **** 2025-09-13 00:36:58.494753 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-13 00:36:58.494765 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-13 00:36:58.494776 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-13 00:36:58.494787 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-13 00:36:58.494797 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-13 00:36:58.494825 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-13 00:36:58.494837 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-13 00:36:58.494848 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-13 00:36:58.494859 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:36:58.494870 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-13 00:36:58.494881 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-13 00:36:58.494891 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-13 00:36:58.494902 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-13 00:36:58.494913 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:36:58.494923 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-13 00:36:58.494934 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-13 
00:36:58.494945 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-13 00:36:58.494955 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-13 00:36:58.494966 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:36:58.494977 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-13 00:36:58.494987 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-13 00:36:58.494998 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-13 00:36:58.495009 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-13 00:36:58.495019 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:36:58.495030 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-13 00:36:58.495041 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-13 00:36:58.495051 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-13 00:36:58.495069 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-13 00:36:58.495081 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:36:58.495091 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:36:58.495102 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-13 00:36:58.495113 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-13 00:36:58.495123 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-13 00:36:58.495134 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-13 00:36:58.495145 | 
orchestrator | skipping: [testbed-node-5] 2025-09-13 00:36:58.495155 | orchestrator | 2025-09-13 00:36:58.495166 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-13 00:36:58.495196 | orchestrator | Saturday 13 September 2025 00:36:57 +0000 (0:00:01.776) 0:00:43.702 **** 2025-09-13 00:36:58.495207 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:36:58.495218 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:36:58.495229 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:36:58.495239 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:36:58.495250 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:36:58.495266 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:36:58.495277 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:36:58.495287 | orchestrator | 2025-09-13 00:36:58.495298 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-13 00:36:58.495309 | orchestrator | Saturday 13 September 2025 00:36:57 +0000 (0:00:00.561) 0:00:44.263 **** 2025-09-13 00:36:58.495319 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:36:58.495330 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:36:58.495340 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:36:58.495351 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:36:58.495362 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:36:58.495372 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:36:58.495383 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:36:58.495393 | orchestrator | 2025-09-13 00:36:58.495404 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:36:58.495416 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-13 00:36:58.495429 | orchestrator | testbed-node-0 : ok=20  changed=5  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-13 00:36:58.495440 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-13 00:36:58.495475 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-13 00:36:58.495487 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-13 00:36:58.495497 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-13 00:36:58.495508 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-13 00:36:58.495518 | orchestrator | 2025-09-13 00:36:58.495529 | orchestrator | 2025-09-13 00:36:58.495540 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 00:36:58.495551 | orchestrator | Saturday 13 September 2025 00:36:58 +0000 (0:00:00.600) 0:00:44.864 **** 2025-09-13 00:36:58.495561 | orchestrator | =============================================================================== 2025-09-13 00:36:58.495580 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.33s 2025-09-13 00:36:58.495591 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.31s 2025-09-13 00:36:58.495602 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.03s 2025-09-13 00:36:58.495613 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.52s 2025-09-13 00:36:58.495623 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.14s 2025-09-13 00:36:58.495634 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 2.04s 2025-09-13 00:36:58.495644 | orchestrator | osism.commons.network : Install 
required packages ----------------------- 1.96s 2025-09-13 00:36:58.495655 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.83s 2025-09-13 00:36:58.495666 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.79s 2025-09-13 00:36:58.495676 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.78s 2025-09-13 00:36:58.495687 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.43s 2025-09-13 00:36:58.495698 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.24s 2025-09-13 00:36:58.495708 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.23s 2025-09-13 00:36:58.495719 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.17s 2025-09-13 00:36:58.495729 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.12s 2025-09-13 00:36:58.495740 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.07s 2025-09-13 00:36:58.495751 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.04s 2025-09-13 00:36:58.495761 | orchestrator | osism.commons.network : Create required directories --------------------- 0.95s 2025-09-13 00:36:58.495772 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.89s 2025-09-13 00:36:58.495783 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.89s 2025-09-13 00:36:58.739717 | orchestrator | + osism apply wireguard 2025-09-13 00:37:10.792128 | orchestrator | 2025-09-13 00:37:10 | INFO  | Task 85c38a8f-016f-4af9-8dbc-72b23068d15e (wireguard) was prepared for execution. 
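For reference, the "Create systemd networkd netdev files" and "Create systemd networkd network files" tasks above render one unit pair per interface from the logged item data (`key` vxlan0/vxlan1, `vni`, `mtu`, `local_ip`, `dests`). A minimal sketch of what those files could look like on testbed-manager, assuming the `30-vxlan0` naming visible in the cleanup task and standard systemd-networkd VXLAN options; the role's actual templates may differ:

```ini
# /etc/systemd/network/30-vxlan0.netdev (sketch; values from the logged item for testbed-manager)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
MacLearning=true

# /etc/systemd/network/30-vxlan0.network (sketch)
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20

# One static FDB entry per remote endpoint ('dests' in the log), so
# broadcast/unknown-unicast traffic is head-end replicated to each peer
# without needing a multicast group.
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.10

[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.11
# ... one [BridgeFDB] section per remaining dest (192.168.16.12-15)
```

After a `networkctl reload`, `ip -d link show vxlan0` should report the VNI, local IP and MTU, and `bridge fdb show dev vxlan0` the per-peer all-zero MAC entries.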
2025-09-13 00:37:10.792243 | orchestrator | 2025-09-13 00:37:10 | INFO  | It takes a moment until task 85c38a8f-016f-4af9-8dbc-72b23068d15e (wireguard) has been started and output is visible here. 2025-09-13 00:37:28.608411 | orchestrator | 2025-09-13 00:37:28.608564 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-13 00:37:28.608582 | orchestrator | 2025-09-13 00:37:28.608594 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-13 00:37:28.608626 | orchestrator | Saturday 13 September 2025 00:37:14 +0000 (0:00:00.209) 0:00:00.209 **** 2025-09-13 00:37:28.608638 | orchestrator | ok: [testbed-manager] 2025-09-13 00:37:28.608650 | orchestrator | 2025-09-13 00:37:28.608661 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-13 00:37:28.608672 | orchestrator | Saturday 13 September 2025 00:37:15 +0000 (0:00:01.250) 0:00:01.459 **** 2025-09-13 00:37:28.608683 | orchestrator | changed: [testbed-manager] 2025-09-13 00:37:28.608695 | orchestrator | 2025-09-13 00:37:28.608706 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-13 00:37:28.608717 | orchestrator | Saturday 13 September 2025 00:37:21 +0000 (0:00:05.910) 0:00:07.370 **** 2025-09-13 00:37:28.608728 | orchestrator | changed: [testbed-manager] 2025-09-13 00:37:28.608739 | orchestrator | 2025-09-13 00:37:28.608750 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-13 00:37:28.608760 | orchestrator | Saturday 13 September 2025 00:37:22 +0000 (0:00:00.489) 0:00:07.860 **** 2025-09-13 00:37:28.608771 | orchestrator | changed: [testbed-manager] 2025-09-13 00:37:28.608807 | orchestrator | 2025-09-13 00:37:28.608819 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-13 00:37:28.608831 | orchestrator 
| Saturday 13 September 2025 00:37:22 +0000 (0:00:00.378) 0:00:08.238 **** 2025-09-13 00:37:28.608842 | orchestrator | ok: [testbed-manager] 2025-09-13 00:37:28.608853 | orchestrator | 2025-09-13 00:37:28.608863 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-13 00:37:28.608874 | orchestrator | Saturday 13 September 2025 00:37:22 +0000 (0:00:00.479) 0:00:08.718 **** 2025-09-13 00:37:28.608885 | orchestrator | ok: [testbed-manager] 2025-09-13 00:37:28.608896 | orchestrator | 2025-09-13 00:37:28.608906 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-13 00:37:28.608917 | orchestrator | Saturday 13 September 2025 00:37:23 +0000 (0:00:00.466) 0:00:09.185 **** 2025-09-13 00:37:28.608928 | orchestrator | ok: [testbed-manager] 2025-09-13 00:37:28.608939 | orchestrator | 2025-09-13 00:37:28.608949 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-13 00:37:28.608962 | orchestrator | Saturday 13 September 2025 00:37:23 +0000 (0:00:00.409) 0:00:09.594 **** 2025-09-13 00:37:28.608975 | orchestrator | changed: [testbed-manager] 2025-09-13 00:37:28.608987 | orchestrator | 2025-09-13 00:37:28.608999 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-13 00:37:28.609011 | orchestrator | Saturday 13 September 2025 00:37:24 +0000 (0:00:01.063) 0:00:10.658 **** 2025-09-13 00:37:28.609024 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-13 00:37:28.609037 | orchestrator | changed: [testbed-manager] 2025-09-13 00:37:28.609049 | orchestrator | 2025-09-13 00:37:28.609060 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-13 00:37:28.609074 | orchestrator | Saturday 13 September 2025 00:37:25 +0000 (0:00:00.829) 0:00:11.488 **** 2025-09-13 00:37:28.609086 | orchestrator | changed: 
[testbed-manager] 2025-09-13 00:37:28.609099 | orchestrator | 2025-09-13 00:37:28.609111 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-13 00:37:28.609123 | orchestrator | Saturday 13 September 2025 00:37:27 +0000 (0:00:01.591) 0:00:13.079 **** 2025-09-13 00:37:28.609136 | orchestrator | changed: [testbed-manager] 2025-09-13 00:37:28.609148 | orchestrator | 2025-09-13 00:37:28.609160 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:37:28.609173 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:37:28.609186 | orchestrator | 2025-09-13 00:37:28.609198 | orchestrator | 2025-09-13 00:37:28.609211 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 00:37:28.609223 | orchestrator | Saturday 13 September 2025 00:37:28 +0000 (0:00:00.975) 0:00:14.055 **** 2025-09-13 00:37:28.609236 | orchestrator | =============================================================================== 2025-09-13 00:37:28.609248 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.91s 2025-09-13 00:37:28.609260 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.59s 2025-09-13 00:37:28.609272 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.25s 2025-09-13 00:37:28.609284 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.06s 2025-09-13 00:37:28.609297 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.98s 2025-09-13 00:37:28.609309 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.83s 2025-09-13 00:37:28.609321 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.49s 
2025-09-13 00:37:28.609332 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.48s 2025-09-13 00:37:28.609342 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.47s 2025-09-13 00:37:28.609353 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2025-09-13 00:37:28.609373 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.38s 2025-09-13 00:37:28.891530 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-13 00:37:28.919803 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-13 00:37:28.919849 | orchestrator | Dload Upload Total Spent Left Speed 2025-09-13 00:37:29.003211 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 181 0 --:--:-- --:--:-- --:--:-- 182 2025-09-13 00:37:29.016000 | orchestrator | + osism apply --environment custom workarounds 2025-09-13 00:37:30.708062 | orchestrator | 2025-09-13 00:37:30 | INFO  | Trying to run play workarounds in environment custom 2025-09-13 00:37:40.823924 | orchestrator | 2025-09-13 00:37:40 | INFO  | Task 2a975aa9-6b8e-4eaa-9cf4-cd6d2744cc2d (workarounds) was prepared for execution. 2025-09-13 00:37:40.824038 | orchestrator | 2025-09-13 00:37:40 | INFO  | It takes a moment until task 2a975aa9-6b8e-4eaa-9cf4-cd6d2744cc2d (workarounds) has been started and output is visible here. 
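The wireguard role above generates server and preshared keys, renders `/etc/wireguard/wg0.conf` plus per-client configuration files, and starts `wg-quick@wg0`. A minimal sketch of the kind of wg-quick configuration such a role produces; the addresses, keys, and peer layout here are placeholders and are not taken from the job output:

```ini
# /etc/wireguard/wg0.conf (illustrative sketch; keys and addresses are placeholders)
[Interface]
Address = 192.168.48.1/24          # assumed VPN subnet, not from the log
ListenPort = 51820
PrivateKey = <server-private-key>  # produced by the "Create public and private key - server" task

[Peer]
# one [Peer] block per client configuration file copied by the role
PublicKey = <client-public-key>
PresharedKey = <preshared-key>     # produced by the "Create preshared key" task
AllowedIPs = 192.168.48.2/32
```

`systemctl enable --now wg-quick@wg0` brings the tunnel up, which corresponds to the "Manage wg-quick@wg0.service service" task and the "Restart wg0 service" handler above.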
2025-09-13 00:38:05.023845 | orchestrator | 2025-09-13 00:38:05.023952 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-13 00:38:05.023968 | orchestrator | 2025-09-13 00:38:05.023980 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-13 00:38:05.023992 | orchestrator | Saturday 13 September 2025 00:37:44 +0000 (0:00:00.152) 0:00:00.152 **** 2025-09-13 00:38:05.024004 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-13 00:38:05.024015 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-13 00:38:05.024026 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-13 00:38:05.024037 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-13 00:38:05.024048 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-13 00:38:05.024059 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-13 00:38:05.024069 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-13 00:38:05.024080 | orchestrator | 2025-09-13 00:38:05.024091 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-13 00:38:05.024102 | orchestrator | 2025-09-13 00:38:05.024113 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-13 00:38:05.024124 | orchestrator | Saturday 13 September 2025 00:37:45 +0000 (0:00:00.762) 0:00:00.914 **** 2025-09-13 00:38:05.024135 | orchestrator | ok: [testbed-manager] 2025-09-13 00:38:05.024147 | orchestrator | 2025-09-13 00:38:05.024158 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-13 00:38:05.024168 | orchestrator | 2025-09-13 00:38:05.024179 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-09-13 00:38:05.024190 | orchestrator | Saturday 13 September 2025 00:37:48 +0000 (0:00:02.500) 0:00:03.415 **** 2025-09-13 00:38:05.024201 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:38:05.024212 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:38:05.024223 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:38:05.024233 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:38:05.024244 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:38:05.024255 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:38:05.024266 | orchestrator | 2025-09-13 00:38:05.024278 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-13 00:38:05.024289 | orchestrator | 2025-09-13 00:38:05.024300 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-13 00:38:05.024311 | orchestrator | Saturday 13 September 2025 00:37:49 +0000 (0:00:01.823) 0:00:05.238 **** 2025-09-13 00:38:05.024323 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-13 00:38:05.024334 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-13 00:38:05.024365 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-13 00:38:05.024377 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-13 00:38:05.024391 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-13 00:38:05.024403 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-13 00:38:05.024415 | orchestrator | 2025-09-13 00:38:05.024428 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-09-13 00:38:05.024440 | orchestrator | Saturday 13 September 2025 00:37:51 +0000 (0:00:01.488) 0:00:06.727 **** 2025-09-13 00:38:05.024454 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:38:05.024467 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:38:05.024480 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:38:05.024493 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:38:05.024530 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:38:05.024543 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:38:05.024555 | orchestrator | 2025-09-13 00:38:05.024568 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-13 00:38:05.024581 | orchestrator | Saturday 13 September 2025 00:37:55 +0000 (0:00:03.830) 0:00:10.557 **** 2025-09-13 00:38:05.024593 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:38:05.024606 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:38:05.024618 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:38:05.024630 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:38:05.024642 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:38:05.024656 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:38:05.024668 | orchestrator | 2025-09-13 00:38:05.024681 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-13 00:38:05.024693 | orchestrator | 2025-09-13 00:38:05.024706 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-13 00:38:05.024719 | orchestrator | Saturday 13 September 2025 00:37:55 +0000 (0:00:00.570) 0:00:11.128 **** 2025-09-13 00:38:05.024732 | orchestrator | changed: [testbed-manager] 2025-09-13 00:38:05.024744 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:38:05.024755 | orchestrator | changed: [testbed-node-2] 2025-09-13 
00:38:05.024765 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:38:05.024776 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:38:05.024787 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:38:05.024797 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:38:05.024808 | orchestrator |
2025-09-13 00:38:05.024819 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-09-13 00:38:05.024830 | orchestrator | Saturday 13 September 2025 00:37:57 +0000 (0:00:01.501) 0:00:12.629 ****
2025-09-13 00:38:05.024848 | orchestrator | changed: [testbed-manager]
2025-09-13 00:38:05.024859 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:38:05.024870 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:38:05.024881 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:38:05.024891 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:38:05.024902 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:38:05.024929 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:38:05.024941 | orchestrator |
2025-09-13 00:38:05.024952 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-09-13 00:38:05.024964 | orchestrator | Saturday 13 September 2025 00:37:58 +0000 (0:00:01.409) 0:00:14.039 ****
2025-09-13 00:38:05.024975 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:38:05.024985 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:38:05.024996 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:38:05.025007 | orchestrator | ok: [testbed-manager]
2025-09-13 00:38:05.025018 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:38:05.025036 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:38:05.025047 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:38:05.025058 | orchestrator |
2025-09-13 00:38:05.025069 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-09-13 00:38:05.025080 | orchestrator | Saturday 13 September 2025 00:38:00 +0000 (0:00:01.356) 0:00:15.396 ****
2025-09-13 00:38:05.025090 | orchestrator | changed: [testbed-manager]
2025-09-13 00:38:05.025101 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:38:05.025111 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:38:05.025122 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:38:05.025133 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:38:05.025143 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:38:05.025154 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:38:05.025165 | orchestrator |
2025-09-13 00:38:05.025176 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-09-13 00:38:05.025186 | orchestrator | Saturday 13 September 2025 00:38:01 +0000 (0:00:01.708) 0:00:17.104 ****
2025-09-13 00:38:05.025197 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:38:05.025207 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:38:05.025218 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:38:05.025229 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:38:05.025239 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:38:05.025250 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:38:05.025261 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:38:05.025271 | orchestrator |
2025-09-13 00:38:05.025282 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-09-13 00:38:05.025293 | orchestrator |
2025-09-13 00:38:05.025304 | orchestrator | TASK [Install python3-docker] **************************************************
2025-09-13 00:38:05.025314 | orchestrator | Saturday 13 September 2025 00:38:02 +0000 (0:00:00.625) 0:00:17.730 ****
2025-09-13 00:38:05.025325 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:38:05.025336 | orchestrator | ok: [testbed-manager]
2025-09-13 00:38:05.025347 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:38:05.025357 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:38:05.025368 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:38:05.025379 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:38:05.025389 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:38:05.025400 | orchestrator |
2025-09-13 00:38:05.025411 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:38:05.025423 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-13 00:38:05.025435 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:38:05.025446 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:38:05.025457 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:38:05.025468 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:38:05.025479 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:38:05.025490 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:38:05.025501 | orchestrator |
2025-09-13 00:38:05.025527 | orchestrator |
2025-09-13 00:38:05.025539 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:38:05.025550 | orchestrator | Saturday 13 September 2025 00:38:04 +0000 (0:00:02.567) 0:00:20.298 ****
2025-09-13 00:38:05.025567 | orchestrator | ===============================================================================
2025-09-13 00:38:05.025578 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.83s
2025-09-13 00:38:05.025589 | orchestrator | Install python3-docker -------------------------------------------------- 2.57s
2025-09-13 00:38:05.025599 | orchestrator | Apply netplan configuration --------------------------------------------- 2.50s
2025-09-13 00:38:05.025610 | orchestrator | Apply netplan configuration --------------------------------------------- 1.82s
2025-09-13 00:38:05.025621 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.71s
2025-09-13 00:38:05.025632 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.50s
2025-09-13 00:38:05.025642 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.49s
2025-09-13 00:38:05.025653 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.41s
2025-09-13 00:38:05.025669 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.36s
2025-09-13 00:38:05.025680 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.76s
2025-09-13 00:38:05.025691 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s
2025-09-13 00:38:05.025709 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.57s
2025-09-13 00:38:05.660549 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-09-13 00:38:17.649183 | orchestrator | 2025-09-13 00:38:17 | INFO  | Task 4795583d-664b-46e2-8510-e31aa2bc5511 (reboot) was prepared for execution.
2025-09-13 00:38:17.649293 | orchestrator | 2025-09-13 00:38:17 | INFO  | It takes a moment until task 4795583d-664b-46e2-8510-e31aa2bc5511 (reboot) has been started and output is visible here.
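The `-e ireallymeanit=yes` extra variable passed to `osism apply reboot` above is a confirmation guard: the play's first task aborts the run unless it is set. A minimal shell sketch of that check (the helper name is hypothetical, not taken from the testbed scripts):

```shell
# Hypothetical sketch of the confirmation guard enforced by the
# "Exit playbook, if user did not mean to reboot systems" task.
confirm_reboot() {
    # Proceed only when the operator explicitly passed "yes".
    [[ "${1:-no}" == "yes" ]]
}
```

Only `confirm_reboot yes` succeeds; any other value (or none) fails, so the reboot tasks never run on an unconfirmed invocation.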
2025-09-13 00:38:27.487594 | orchestrator |
2025-09-13 00:38:27.487699 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-13 00:38:27.487714 | orchestrator |
2025-09-13 00:38:27.487725 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-13 00:38:27.487736 | orchestrator | Saturday 13 September 2025 00:38:21 +0000 (0:00:00.222) 0:00:00.222 ****
2025-09-13 00:38:27.487746 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:38:27.487756 | orchestrator |
2025-09-13 00:38:27.487766 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-13 00:38:27.487776 | orchestrator | Saturday 13 September 2025 00:38:21 +0000 (0:00:00.102) 0:00:00.325 ****
2025-09-13 00:38:27.487786 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:38:27.487795 | orchestrator |
2025-09-13 00:38:27.487805 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-13 00:38:27.487815 | orchestrator | Saturday 13 September 2025 00:38:22 +0000 (0:00:00.967) 0:00:01.293 ****
2025-09-13 00:38:27.487824 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:38:27.487834 | orchestrator |
2025-09-13 00:38:27.487844 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-13 00:38:27.487854 | orchestrator |
2025-09-13 00:38:27.487863 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-13 00:38:27.487873 | orchestrator | Saturday 13 September 2025 00:38:22 +0000 (0:00:00.096) 0:00:01.389 ****
2025-09-13 00:38:27.487883 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:38:27.487892 | orchestrator |
2025-09-13 00:38:27.487902 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-13 00:38:27.487911 | orchestrator | Saturday 13 September 2025 00:38:23 +0000 (0:00:00.097) 0:00:01.486 ****
2025-09-13 00:38:27.487921 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:38:27.487930 | orchestrator |
2025-09-13 00:38:27.487940 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-13 00:38:27.487950 | orchestrator | Saturday 13 September 2025 00:38:23 +0000 (0:00:00.636) 0:00:02.122 ****
2025-09-13 00:38:27.487960 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:38:27.487993 | orchestrator |
2025-09-13 00:38:27.488004 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-13 00:38:27.488013 | orchestrator |
2025-09-13 00:38:27.488023 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-13 00:38:27.488033 | orchestrator | Saturday 13 September 2025 00:38:23 +0000 (0:00:00.096) 0:00:02.219 ****
2025-09-13 00:38:27.488042 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:38:27.488052 | orchestrator |
2025-09-13 00:38:27.488062 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-13 00:38:27.488071 | orchestrator | Saturday 13 September 2025 00:38:23 +0000 (0:00:00.211) 0:00:02.430 ****
2025-09-13 00:38:27.488081 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:38:27.488090 | orchestrator |
2025-09-13 00:38:27.488102 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-13 00:38:27.488112 | orchestrator | Saturday 13 September 2025 00:38:24 +0000 (0:00:00.616) 0:00:03.046 ****
2025-09-13 00:38:27.488123 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:38:27.488134 | orchestrator |
2025-09-13 00:38:27.488145 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-13 00:38:27.488155 | orchestrator |
2025-09-13 00:38:27.488167 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-13 00:38:27.488178 | orchestrator | Saturday 13 September 2025 00:38:24 +0000 (0:00:00.102) 0:00:03.149 ****
2025-09-13 00:38:27.488189 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:38:27.488199 | orchestrator |
2025-09-13 00:38:27.488210 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-13 00:38:27.488221 | orchestrator | Saturday 13 September 2025 00:38:24 +0000 (0:00:00.086) 0:00:03.235 ****
2025-09-13 00:38:27.488232 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:38:27.488243 | orchestrator |
2025-09-13 00:38:27.488254 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-13 00:38:27.488265 | orchestrator | Saturday 13 September 2025 00:38:25 +0000 (0:00:00.647) 0:00:03.882 ****
2025-09-13 00:38:27.488276 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:38:27.488287 | orchestrator |
2025-09-13 00:38:27.488299 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-13 00:38:27.488309 | orchestrator |
2025-09-13 00:38:27.488321 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-13 00:38:27.488332 | orchestrator | Saturday 13 September 2025 00:38:25 +0000 (0:00:00.106) 0:00:03.989 ****
2025-09-13 00:38:27.488342 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:38:27.488353 | orchestrator |
2025-09-13 00:38:27.488364 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-13 00:38:27.488375 | orchestrator | Saturday 13 September 2025 00:38:25 +0000 (0:00:00.095) 0:00:04.084 ****
2025-09-13 00:38:27.488386 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:38:27.488397 | orchestrator |
2025-09-13 00:38:27.488407 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-13 00:38:27.488418 | orchestrator | Saturday 13 September 2025 00:38:26 +0000 (0:00:00.657) 0:00:04.742 ****
2025-09-13 00:38:27.488429 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:38:27.488440 | orchestrator |
2025-09-13 00:38:27.488452 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-13 00:38:27.488463 | orchestrator |
2025-09-13 00:38:27.488473 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-13 00:38:27.488482 | orchestrator | Saturday 13 September 2025 00:38:26 +0000 (0:00:00.099) 0:00:04.841 ****
2025-09-13 00:38:27.488510 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:38:27.488520 | orchestrator |
2025-09-13 00:38:27.488530 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-13 00:38:27.488540 | orchestrator | Saturday 13 September 2025 00:38:26 +0000 (0:00:00.088) 0:00:04.930 ****
2025-09-13 00:38:27.488549 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:38:27.488558 | orchestrator |
2025-09-13 00:38:27.488568 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-13 00:38:27.488586 | orchestrator | Saturday 13 September 2025 00:38:27 +0000 (0:00:00.653) 0:00:05.583 ****
2025-09-13 00:38:27.488612 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:38:27.488622 | orchestrator |
2025-09-13 00:38:27.488631 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:38:27.488642 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:38:27.488653 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:38:27.488662 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:38:27.488672 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:38:27.488682 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:38:27.488691 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:38:27.488701 | orchestrator |
2025-09-13 00:38:27.488710 | orchestrator |
2025-09-13 00:38:27.488720 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:38:27.488730 | orchestrator | Saturday 13 September 2025 00:38:27 +0000 (0:00:00.036) 0:00:05.620 ****
2025-09-13 00:38:27.488739 | orchestrator | ===============================================================================
2025-09-13 00:38:27.488749 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.18s
2025-09-13 00:38:27.488762 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.68s
2025-09-13 00:38:27.488772 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.54s
2025-09-13 00:38:27.793549 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-09-13 00:38:39.912731 | orchestrator | 2025-09-13 00:38:39 | INFO  | Task 64e219ae-7b72-47d1-9f92-9355a6cd9e72 (wait-for-connection) was prepared for execution.
2025-09-13 00:38:39.912834 | orchestrator | 2025-09-13 00:38:39 | INFO  | It takes a moment until task 64e219ae-7b72-47d1-9f92-9355a6cd9e72 (wait-for-connection) has been started and output is visible here.
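The `wait-for-connection` run queued above exists because the reboot play before it deliberately does not wait for the nodes to come back. Ansible's `wait_for_connection` module handles the waiting; conceptually it amounts to polling each node until the transport answers, as in this hedged shell sketch (the helper name and timeout are illustrative, not from the testbed scripts):

```shell
# Illustrative only: poll a host over SSH until it accepts a connection,
# roughly what the "Wait until remote system is reachable" task achieves.
wait_for_ssh() {
    local host=$1 timeout=${2:-300}
    local start
    start=$(date +%s)
    # Retry every 5 seconds until the host answers or the timeout elapses.
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        if (( $(date +%s) - start >= timeout )); then
            echo "Timed out waiting for $host" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the run below the nodes answered within about 11 seconds of the task starting, well inside any reasonable timeout.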
2025-09-13 00:38:55.235821 | orchestrator | 2025-09-13 00:38:55.235938 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-13 00:38:55.235956 | orchestrator | 2025-09-13 00:38:55.235968 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-13 00:38:55.235980 | orchestrator | Saturday 13 September 2025 00:38:43 +0000 (0:00:00.214) 0:00:00.214 **** 2025-09-13 00:38:55.235991 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:38:55.236003 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:38:55.236014 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:38:55.236025 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:38:55.236035 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:38:55.236046 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:38:55.236057 | orchestrator | 2025-09-13 00:38:55.236068 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:38:55.236079 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:38:55.236092 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:38:55.236103 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:38:55.236142 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:38:55.236171 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:38:55.236183 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:38:55.236194 | orchestrator | 2025-09-13 00:38:55.236205 | orchestrator | 2025-09-13 00:38:55.236216 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-13 00:38:55.236233 | orchestrator | Saturday 13 September 2025 00:38:55 +0000 (0:00:11.491) 0:00:11.706 **** 2025-09-13 00:38:55.236244 | orchestrator | =============================================================================== 2025-09-13 00:38:55.236255 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.49s 2025-09-13 00:38:55.430342 | orchestrator | + osism apply hddtemp 2025-09-13 00:39:07.268211 | orchestrator | 2025-09-13 00:39:07 | INFO  | Task 4451e767-28c2-42ac-b1e0-69c5280a8f26 (hddtemp) was prepared for execution. 2025-09-13 00:39:07.268315 | orchestrator | 2025-09-13 00:39:07 | INFO  | It takes a moment until task 4451e767-28c2-42ac-b1e0-69c5280a8f26 (hddtemp) has been started and output is visible here. 2025-09-13 00:39:35.086195 | orchestrator | 2025-09-13 00:39:35.086297 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-13 00:39:35.086310 | orchestrator | 2025-09-13 00:39:35.086321 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-13 00:39:35.086331 | orchestrator | Saturday 13 September 2025 00:39:11 +0000 (0:00:00.271) 0:00:00.271 **** 2025-09-13 00:39:35.086342 | orchestrator | ok: [testbed-manager] 2025-09-13 00:39:35.086353 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:39:35.086363 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:39:35.086372 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:39:35.086382 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:39:35.086391 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:39:35.086401 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:39:35.086411 | orchestrator | 2025-09-13 00:39:35.086420 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-13 00:39:35.086430 | orchestrator | Saturday 13 September 2025 
00:39:11 +0000 (0:00:00.706) 0:00:00.977 **** 2025-09-13 00:39:35.086442 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-13 00:39:35.086455 | orchestrator | 2025-09-13 00:39:35.086465 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-13 00:39:35.086474 | orchestrator | Saturday 13 September 2025 00:39:12 +0000 (0:00:01.187) 0:00:02.165 **** 2025-09-13 00:39:35.086484 | orchestrator | ok: [testbed-manager] 2025-09-13 00:39:35.086494 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:39:35.086553 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:39:35.086563 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:39:35.086573 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:39:35.086582 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:39:35.086592 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:39:35.086601 | orchestrator | 2025-09-13 00:39:35.086611 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-13 00:39:35.086621 | orchestrator | Saturday 13 September 2025 00:39:14 +0000 (0:00:01.906) 0:00:04.071 **** 2025-09-13 00:39:35.086631 | orchestrator | changed: [testbed-manager] 2025-09-13 00:39:35.086641 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:39:35.086651 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:39:35.086660 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:39:35.086670 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:39:35.086703 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:39:35.086714 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:39:35.086723 | orchestrator | 2025-09-13 00:39:35.086733 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-09-13 00:39:35.086743 | orchestrator | Saturday 13 September 2025 00:39:16 +0000 (0:00:01.119) 0:00:05.191 **** 2025-09-13 00:39:35.086753 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:39:35.086762 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:39:35.086772 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:39:35.086781 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:39:35.086791 | orchestrator | ok: [testbed-manager] 2025-09-13 00:39:35.086800 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:39:35.086809 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:39:35.086819 | orchestrator | 2025-09-13 00:39:35.086829 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-13 00:39:35.086838 | orchestrator | Saturday 13 September 2025 00:39:17 +0000 (0:00:01.254) 0:00:06.445 **** 2025-09-13 00:39:35.086848 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:39:35.086857 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:39:35.086867 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:39:35.086877 | orchestrator | changed: [testbed-manager] 2025-09-13 00:39:35.086887 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:39:35.086896 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:39:35.086906 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:39:35.086915 | orchestrator | 2025-09-13 00:39:35.086925 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-13 00:39:35.086934 | orchestrator | Saturday 13 September 2025 00:39:18 +0000 (0:00:00.818) 0:00:07.264 **** 2025-09-13 00:39:35.086944 | orchestrator | changed: [testbed-manager] 2025-09-13 00:39:35.086953 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:39:35.086963 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:39:35.086972 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:39:35.086981 | orchestrator | changed: 
[testbed-node-5] 2025-09-13 00:39:35.086991 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:39:35.087000 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:39:35.087010 | orchestrator | 2025-09-13 00:39:35.087019 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-13 00:39:35.087029 | orchestrator | Saturday 13 September 2025 00:39:30 +0000 (0:00:12.570) 0:00:19.834 **** 2025-09-13 00:39:35.087039 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-13 00:39:35.087049 | orchestrator | 2025-09-13 00:39:35.087059 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-13 00:39:35.087068 | orchestrator | Saturday 13 September 2025 00:39:32 +0000 (0:00:01.356) 0:00:21.191 **** 2025-09-13 00:39:35.087078 | orchestrator | changed: [testbed-manager] 2025-09-13 00:39:35.087100 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:39:35.087110 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:39:35.087119 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:39:35.087128 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:39:35.087138 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:39:35.087147 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:39:35.087157 | orchestrator | 2025-09-13 00:39:35.087166 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:39:35.087176 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:39:35.087204 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-13 00:39:35.087216 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-13 00:39:35.087233 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-13 00:39:35.087243 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-13 00:39:35.087253 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-13 00:39:35.087263 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-13 00:39:35.087272 | orchestrator | 2025-09-13 00:39:35.087282 | orchestrator | 2025-09-13 00:39:35.087292 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 00:39:35.087301 | orchestrator | Saturday 13 September 2025 00:39:34 +0000 (0:00:02.719) 0:00:23.911 **** 2025-09-13 00:39:35.087311 | orchestrator | =============================================================================== 2025-09-13 00:39:35.087320 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.57s 2025-09-13 00:39:35.087330 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.72s 2025-09-13 00:39:35.087339 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.91s 2025-09-13 00:39:35.087349 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.36s 2025-09-13 00:39:35.087359 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.25s 2025-09-13 00:39:35.087368 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.19s 2025-09-13 00:39:35.087378 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.12s 2025-09-13 00:39:35.087387 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.82s 2025-09-13 00:39:35.087397 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.71s 2025-09-13 00:39:35.389763 | orchestrator | ++ semver latest 7.1.1 2025-09-13 00:39:35.446413 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-13 00:39:35.446459 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-13 00:39:35.446472 | orchestrator | + sudo systemctl restart manager.service 2025-09-13 00:39:48.910519 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-13 00:39:48.910638 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-13 00:39:48.910646 | orchestrator | + local max_attempts=60 2025-09-13 00:39:48.910651 | orchestrator | + local name=ceph-ansible 2025-09-13 00:39:48.910655 | orchestrator | + local attempt_num=1 2025-09-13 00:39:48.910666 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-13 00:39:48.946763 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-13 00:39:48.946794 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-13 00:39:48.946799 | orchestrator | + sleep 5 2025-09-13 00:39:53.950801 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-13 00:39:53.976932 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-13 00:39:53.976976 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-13 00:39:53.976989 | orchestrator | + sleep 5 2025-09-13 00:39:58.980733 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-13 00:39:59.019919 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-13 00:39:59.019991 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-13 00:39:59.020005 | orchestrator | + sleep 5 2025-09-13 00:40:04.024960 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-13 00:40:04.061326 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-13 00:40:04.061409 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-13 00:40:04.061423 | orchestrator | + sleep 5
2025-09-13 00:40:09.066116 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-13 00:40:09.108665 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-13 00:40:09.108704 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-13 00:40:09.108734 | orchestrator | + sleep 5
2025-09-13 00:40:14.114383 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-13 00:40:14.154737 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-13 00:40:14.154833 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-13 00:40:14.154849 | orchestrator | + sleep 5
2025-09-13 00:40:19.159429 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-13 00:40:19.199397 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-13 00:40:19.199438 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-13 00:40:19.199449 | orchestrator | + sleep 5
2025-09-13 00:40:24.205844 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-13 00:40:24.236167 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-13 00:40:24.236273 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-13 00:40:24.236303 | orchestrator | + sleep 5
2025-09-13 00:40:29.239469 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-13 00:40:29.273148 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-13 00:40:29.273203 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-13 00:40:29.273214 | orchestrator | + sleep 5
2025-09-13 00:40:34.276876 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-13 00:40:34.313881 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-13 00:40:34.313939 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-13 00:40:34.313953 | orchestrator | + sleep 5
2025-09-13 00:40:39.318928 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-13 00:40:39.360130 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-13 00:40:39.360203 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-13 00:40:39.360218 | orchestrator | + sleep 5
2025-09-13 00:40:44.365149 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-13 00:40:44.403301 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-13 00:40:44.403370 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-13 00:40:44.403383 | orchestrator | + sleep 5
2025-09-13 00:40:49.408913 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-13 00:40:49.440602 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-13 00:40:49.440660 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-13 00:40:49.440674 | orchestrator | + sleep 5
2025-09-13 00:40:54.445089 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-13 00:40:54.483421 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-13 00:40:54.483483 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-13 00:40:54.483497 | orchestrator | + local max_attempts=60
2025-09-13 00:40:54.483511 | orchestrator | + local name=kolla-ansible
2025-09-13 00:40:54.483522 | orchestrator | + local attempt_num=1
2025-09-13 00:40:54.484375 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-13 00:40:54.517916 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-13 00:40:54.517979 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-13 00:40:54.517987 | orchestrator | + local max_attempts=60
2025-09-13 00:40:54.517994 | orchestrator | + local name=osism-ansible
2025-09-13 00:40:54.518000 | orchestrator | + local attempt_num=1
2025-09-13 00:40:54.518348 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-13 00:40:54.550747 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-13 00:40:54.550768 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-13 00:40:54.550778 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-13 00:40:54.717302 | orchestrator | ARA in ceph-ansible already disabled.
2025-09-13 00:40:54.866686 | orchestrator | ARA in kolla-ansible already disabled.
2025-09-13 00:40:55.025895 | orchestrator | ARA in osism-ansible already disabled.
2025-09-13 00:40:55.183113 | orchestrator | ARA in osism-kubernetes already disabled.
2025-09-13 00:40:55.184123 | orchestrator | + osism apply gather-facts
2025-09-13 00:41:07.255432 | orchestrator | 2025-09-13 00:41:07 | INFO  | Task ff6dff1e-d154-4b7c-8cab-2b837c90aa16 (gather-facts) was prepared for execution.
2025-09-13 00:41:07.255508 | orchestrator | 2025-09-13 00:41:07 | INFO  | It takes a moment until task ff6dff1e-d154-4b7c-8cab-2b837c90aa16 (gather-facts) has been started and output is visible here.
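The `set -x` trace above shows a `wait_for_container_healthy` helper polling `docker inspect -f '{{.State.Health.Status}}'` every 5 seconds until a container reports `healthy`. A hedged reconstruction of that loop, inferred from the trace only; the Docker call is replaced here by a stub (`fake_health`) so the sketch runs anywhere, and the sleep is shortened:

```shell
attempts_used=0
health=""

# Stub standing in for `docker inspect -f '{{.State.Health.Status}}' <name>`:
# reports "starting" twice, then "healthy" (simulating a container warming up).
fake_health() {
    attempts_used=$((attempts_used + 1))
    if [ "$attempts_used" -lt 3 ]; then health=starting; else health=healthy; fi
}

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    while true; do
        fake_health "$name"                 # real script: query docker here
        [[ "$health" == healthy ]] && return 0
        if (( attempt_num++ == max_attempts )); then
            echo "Container ${name} did not become healthy" >&2
            return 1
        fi
        sleep 0.1                           # the real script sleeps 5 seconds
    done
}

wait_for_container_healthy 60 ceph-ansible && result=ok
```

With 60 attempts at 5-second intervals, the real loop gives each of the ceph-ansible, kolla-ansible, and osism-ansible containers about 5 minutes to pass its health check before the deployment aborts.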
2025-09-13 00:41:20.380056 | orchestrator |
2025-09-13 00:41:20.380166 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-13 00:41:20.380208 | orchestrator |
2025-09-13 00:41:20.380221 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-13 00:41:20.380233 | orchestrator | Saturday 13 September 2025 00:41:11 +0000 (0:00:00.230) 0:00:00.230 ****
2025-09-13 00:41:20.380244 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:41:20.380256 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:41:20.380267 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:41:20.380278 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:41:20.380288 | orchestrator | ok: [testbed-manager]
2025-09-13 00:41:20.380299 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:41:20.380310 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:41:20.380320 | orchestrator |
2025-09-13 00:41:20.380331 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-13 00:41:20.380342 | orchestrator |
2025-09-13 00:41:20.380353 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-13 00:41:20.380364 | orchestrator | Saturday 13 September 2025 00:41:19 +0000 (0:00:08.342) 0:00:08.573 ****
2025-09-13 00:41:20.380375 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:41:20.380386 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:41:20.380397 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:41:20.380408 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:41:20.380419 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:41:20.380429 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:41:20.380440 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:41:20.380451 | orchestrator |
2025-09-13 00:41:20.380462 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:41:20.380473 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-13 00:41:20.380486 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-13 00:41:20.380497 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-13 00:41:20.380507 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-13 00:41:20.380518 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-13 00:41:20.380529 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-13 00:41:20.380540 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-13 00:41:20.380551 | orchestrator |
2025-09-13 00:41:20.380562 | orchestrator |
2025-09-13 00:41:20.380573 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:41:20.380584 | orchestrator | Saturday 13 September 2025 00:41:20 +0000 (0:00:00.485) 0:00:09.058 ****
2025-09-13 00:41:20.380610 | orchestrator | ===============================================================================
2025-09-13 00:41:20.380651 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.34s
2025-09-13 00:41:20.380662 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s
2025-09-13 00:41:20.595277 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-09-13 00:41:20.612039 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-09-13 00:41:20.623273 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-09-13 00:41:20.632675 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-09-13 00:41:20.648166 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-09-13 00:41:20.663468 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-09-13 00:41:20.676343 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-09-13 00:41:20.690460 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-09-13 00:41:20.700275 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-09-13 00:41:20.710353 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-09-13 00:41:20.722219 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-09-13 00:41:20.738232 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-09-13 00:41:20.746830 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-09-13 00:41:20.756056 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-09-13 00:41:20.766573 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-09-13 00:41:20.786135 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-09-13 00:41:20.803265 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-09-13 00:41:20.823049 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-09-13 00:41:20.835579 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-09-13 00:41:20.855358 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-09-13 00:41:20.872422 | orchestrator | + [[ false == \t\r\u\e ]]
2025-09-13 00:41:21.265248 | orchestrator | ok: Runtime: 0:23:35.363510
2025-09-13 00:41:21.372096 |
2025-09-13 00:41:21.372269 | TASK [Deploy services]
2025-09-13 00:41:21.904449 | orchestrator | skipping: Conditional result was False
2025-09-13 00:41:21.922281 |
2025-09-13 00:41:21.922435 | TASK [Deploy in a nutshell]
2025-09-13 00:41:22.590849 | orchestrator | + set -e
2025-09-13 00:41:22.592309 | orchestrator |
2025-09-13 00:41:22.592331 | orchestrator | # PULL IMAGES
2025-09-13 00:41:22.592341 | orchestrator |
2025-09-13 00:41:22.592353 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-13 00:41:22.592368 | orchestrator | ++ export INTERACTIVE=false
2025-09-13 00:41:22.592378 | orchestrator | ++ INTERACTIVE=false
2025-09-13 00:41:22.592409 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-13 00:41:22.592423 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-13 00:41:22.592433 | orchestrator | + source /opt/manager-vars.sh
2025-09-13 00:41:22.592441 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-13 00:41:22.592453 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-13 00:41:22.592461 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-13 00:41:22.592473 | orchestrator | ++ CEPH_VERSION=reef
2025-09-13 00:41:22.592480 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-13 00:41:22.592493 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-13 00:41:22.592500 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-13 00:41:22.592510 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-13 00:41:22.592518 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-13 00:41:22.592526 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-13 00:41:22.592533 | orchestrator | ++ export ARA=false
2025-09-13 00:41:22.592540 | orchestrator | ++ ARA=false
2025-09-13 00:41:22.592548 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-13 00:41:22.592555 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-13 00:41:22.592562 | orchestrator | ++ export TEMPEST=true
2025-09-13 00:41:22.592569 | orchestrator | ++ TEMPEST=true
2025-09-13 00:41:22.592576 | orchestrator | ++ export IS_ZUUL=true
2025-09-13 00:41:22.592583 | orchestrator | ++ IS_ZUUL=true
2025-09-13 00:41:22.592591 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.209
2025-09-13 00:41:22.592598 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.209
2025-09-13 00:41:22.592651 | orchestrator | ++ export EXTERNAL_API=false
2025-09-13 00:41:22.592659 | orchestrator | ++ EXTERNAL_API=false
2025-09-13 00:41:22.592666 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-13 00:41:22.592673 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-13 00:41:22.592681 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-13 00:41:22.592688 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-13 00:41:22.592695 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-13 00:41:22.592703 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-13 00:41:22.592710 | orchestrator | + echo
2025-09-13 00:41:22.592717 | orchestrator | + echo '# PULL IMAGES'
2025-09-13 00:41:22.592724 | orchestrator | + echo
2025-09-13 00:41:22.592741 | orchestrator | ++ semver latest 7.0.0
2025-09-13 00:41:22.652563 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-13 00:41:22.652641 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-13 00:41:22.652655 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2025-09-13 00:41:24.286378 | orchestrator | 2025-09-13 00:41:24 | INFO  | Trying to run play pull-images in environment custom
2025-09-13 00:41:34.447314 | orchestrator | 2025-09-13 00:41:34 | INFO  | Task f8b7f53e-6d92-4b72-98ef-3ef4e8cc413f (pull-images) was prepared for execution.
2025-09-13 00:41:34.447433 | orchestrator | 2025-09-13 00:41:34 | INFO  | Task f8b7f53e-6d92-4b72-98ef-3ef4e8cc413f is running in background. No more output. Check ARA for logs.
2025-09-13 00:41:36.430318 | orchestrator | 2025-09-13 00:41:36 | INFO  | Trying to run play wipe-partitions in environment custom
2025-09-13 00:41:46.623419 | orchestrator | 2025-09-13 00:41:46 | INFO  | Task e444ccc8-ddb8-4dce-bdc3-6dc6e620a103 (wipe-partitions) was prepared for execution.
2025-09-13 00:41:46.623543 | orchestrator | 2025-09-13 00:41:46 | INFO  | It takes a moment until task e444ccc8-ddb8-4dce-bdc3-6dc6e620a103 (wipe-partitions) has been started and output is visible here.
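The trace above shows the deployment script pulling its knobs from `/opt/manager-vars.sh` via `source`, then gating on the manager version: `semver latest 7.0.0` returns -1 (the tag is not a comparable version number), so the script falls back to an exact string match on `latest`. A minimal sketch of that pattern, using a temp file in place of `/opt/manager-vars.sh` and a hypothetical `environment` variable to show the branch taken:

```shell
# Write a stand-in vars file in the same plain `export` style as
# /opt/manager-vars.sh (values copied from the trace above).
vars_file=$(mktemp)
cat > "$vars_file" <<'EOF'
export MANAGER_VERSION=latest
export OPENSTACK_VERSION=2024.2
export CEPH_STACK=ceph-ansible
EOF

source "$vars_file"

# `semver latest 7.0.0` yields -1 in the log (not numerically comparable),
# so the script special-cases the "latest" tag with a string comparison.
if [[ "$MANAGER_VERSION" == latest ]]; then
    environment=custom     # as in the log: `osism apply ... -e custom pull-images`
else
    environment=legacy     # assumed fallback branch for pinned older versions
fi

rm -f "$vars_file"
```

Keeping the knobs in one sourced file means both the Zuul job and an interactive operator see identical settings; the `set -x` trace echoes each `export` so the effective configuration is visible in the log.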
2025-09-13 00:41:59.001617 | orchestrator |
2025-09-13 00:41:59.001751 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-09-13 00:41:59.001769 | orchestrator |
2025-09-13 00:41:59.001781 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-09-13 00:41:59.001798 | orchestrator | Saturday 13 September 2025 00:41:51 +0000 (0:00:00.134) 0:00:00.134 ****
2025-09-13 00:41:59.001810 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:41:59.001822 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:41:59.001833 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:41:59.001845 | orchestrator |
2025-09-13 00:41:59.001856 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-09-13 00:41:59.001893 | orchestrator | Saturday 13 September 2025 00:41:51 +0000 (0:00:00.543) 0:00:00.678 ****
2025-09-13 00:41:59.001905 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:41:59.001916 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:41:59.001931 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:41:59.001942 | orchestrator |
2025-09-13 00:41:59.001954 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-09-13 00:41:59.001965 | orchestrator | Saturday 13 September 2025 00:41:51 +0000 (0:00:00.255) 0:00:00.933 ****
2025-09-13 00:41:59.001976 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:41:59.001988 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:41:59.001999 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:41:59.002009 | orchestrator |
2025-09-13 00:41:59.002076 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-09-13 00:41:59.002089 | orchestrator | Saturday 13 September 2025 00:41:52 +0000 (0:00:00.688) 0:00:01.621 ****
2025-09-13 00:41:59.002100 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:41:59.002111 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:41:59.002122 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:41:59.002133 | orchestrator |
2025-09-13 00:41:59.002147 | orchestrator | TASK [Check device availability] ***********************************************
2025-09-13 00:41:59.002159 | orchestrator | Saturday 13 September 2025 00:41:52 +0000 (0:00:00.258) 0:00:01.880 ****
2025-09-13 00:41:59.002172 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-09-13 00:41:59.002188 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-09-13 00:41:59.002202 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-09-13 00:41:59.002214 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-09-13 00:41:59.002227 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-09-13 00:41:59.002239 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-09-13 00:41:59.002253 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-09-13 00:41:59.002266 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-09-13 00:41:59.002278 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-09-13 00:41:59.002291 | orchestrator |
2025-09-13 00:41:59.002303 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-09-13 00:41:59.002317 | orchestrator | Saturday 13 September 2025 00:41:54 +0000 (0:00:01.162) 0:00:03.043 ****
2025-09-13 00:41:59.002331 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-09-13 00:41:59.002343 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-09-13 00:41:59.002355 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-09-13 00:41:59.002368 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-09-13 00:41:59.002380 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-09-13 00:41:59.002394 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-09-13 00:41:59.002407 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-09-13 00:41:59.002418 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-09-13 00:41:59.002431 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-09-13 00:41:59.002443 | orchestrator |
2025-09-13 00:41:59.002456 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-09-13 00:41:59.002469 | orchestrator | Saturday 13 September 2025 00:41:55 +0000 (0:00:01.286) 0:00:04.329 ****
2025-09-13 00:41:59.002482 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-09-13 00:41:59.002494 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-09-13 00:41:59.002505 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-09-13 00:41:59.002516 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-09-13 00:41:59.002526 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-09-13 00:41:59.002537 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-09-13 00:41:59.002548 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-09-13 00:41:59.002567 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-09-13 00:41:59.002585 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-09-13 00:41:59.002597 | orchestrator |
2025-09-13 00:41:59.002608 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-09-13 00:41:59.002619 | orchestrator | Saturday 13 September 2025 00:41:57 +0000 (0:00:02.131) 0:00:06.460 ****
2025-09-13 00:41:59.002630 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:41:59.002641 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:41:59.002652 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:41:59.002662 | orchestrator |
2025-09-13 00:41:59.002694 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-09-13 00:41:59.002705 | orchestrator | Saturday 13 September 2025 00:41:58 +0000 (0:00:00.557) 0:00:07.018 ****
2025-09-13 00:41:59.002716 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:41:59.002727 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:41:59.002738 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:41:59.002749 | orchestrator |
2025-09-13 00:41:59.002760 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:41:59.002773 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:41:59.002786 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:41:59.002815 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:41:59.002827 | orchestrator |
2025-09-13 00:41:59.002838 | orchestrator |
2025-09-13 00:41:59.002849 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:41:59.002860 | orchestrator | Saturday 13 September 2025 00:41:58 +0000 (0:00:00.620) 0:00:07.638 ****
2025-09-13 00:41:59.002871 | orchestrator | ===============================================================================
2025-09-13 00:41:59.002882 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.13s
2025-09-13 00:41:59.002892 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.29s
2025-09-13 00:41:59.002903 | orchestrator | Check device availability ----------------------------------------------- 1.16s
2025-09-13 00:41:59.002914 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.69s
2025-09-13 00:41:59.002925 | orchestrator | Request device events from the kernel ----------------------------------- 0.62s
2025-09-13 00:41:59.002936 | orchestrator | Reload udev rules ------------------------------------------------------- 0.56s
2025-09-13 00:41:59.002947 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.54s
2025-09-13 00:41:59.002957 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s
2025-09-13 00:41:59.002968 | orchestrator | Remove all rook related logical devices --------------------------------- 0.26s
2025-09-13 00:42:11.334637 | orchestrator | 2025-09-13 00:42:11 | INFO  | Task 87234397-c2ae-4779-98a7-5724408e7acf (facts) was prepared for execution.
2025-09-13 00:42:11.334799 | orchestrator | 2025-09-13 00:42:11 | INFO  | It takes a moment until task 87234397-c2ae-4779-98a7-5724408e7acf (facts) has been started and output is visible here.
2025-09-13 00:42:22.476174 | orchestrator |
2025-09-13 00:42:22.476291 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-13 00:42:22.476309 | orchestrator |
2025-09-13 00:42:22.476321 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-13 00:42:22.476333 | orchestrator | Saturday 13 September 2025 00:42:14 +0000 (0:00:00.244) 0:00:00.244 ****
2025-09-13 00:42:22.476345 | orchestrator | ok: [testbed-manager]
2025-09-13 00:42:22.476357 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:42:22.476368 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:42:22.476403 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:42:22.476415 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:42:22.476426 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:42:22.476436 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:42:22.476447 | orchestrator |
2025-09-13 00:42:22.476458 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-13 00:42:22.476469 |
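The wipe-partitions play above zeroes the first 32M of each data disk (`/dev/sdb` through `/dev/sdd`) after dropping filesystem signatures with `wipefs`. A safe sketch of the zeroing step, with a temporary file standing in for the device so nothing destructive runs; the `wipefs`/`udevadm` follow-ups are shown as comments only:

```shell
# Stand-in for a real device such as /dev/sdb (assumption for illustration).
disk=$(mktemp)

# "Overwrite first 32M with zeros": clobbers any leftover partition table,
# LVM metadata, or Ceph OSD header at the start of the disk.
dd if=/dev/zero of="$disk" bs=1M count=32 status=none

# Against a real device the play additionally runs, per the task names above:
#   wipefs -a <device>               # "Wipe partitions with wipefs"
#   udevadm control --reload-rules   # "Reload udev rules"
#   udevadm trigger                  # "Request device events from the kernel"

size=$(wc -c < "$disk")
rm -f "$disk"
```

Zeroing the head of the disk matters because ceph-volume refuses to provision OSDs on devices that still carry old LVM or Ceph signatures; the udev reload afterwards makes the kernel re-read the now-blank devices.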
orchestrator | Saturday 13 September 2025 00:42:15 +0000 (0:00:01.006) 0:00:01.251 ****
2025-09-13 00:42:22.476480 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:42:22.476491 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:42:22.476502 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:42:22.476513 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:42:22.476524 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:42:22.476535 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:22.476546 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:42:22.476556 | orchestrator |
2025-09-13 00:42:22.476567 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-13 00:42:22.476578 | orchestrator |
2025-09-13 00:42:22.476602 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-13 00:42:22.476613 | orchestrator | Saturday 13 September 2025 00:42:17 +0000 (0:00:01.121) 0:00:02.372 ****
2025-09-13 00:42:22.476624 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:42:22.476635 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:42:22.476647 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:42:22.476658 | orchestrator | ok: [testbed-manager]
2025-09-13 00:42:22.476669 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:42:22.476679 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:42:22.476690 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:42:22.476751 | orchestrator |
2025-09-13 00:42:22.476765 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-13 00:42:22.476776 | orchestrator |
2025-09-13 00:42:22.476789 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-13 00:42:22.476802 | orchestrator | Saturday 13 September 2025 00:42:21 +0000 (0:00:04.535) 0:00:06.908 ****
2025-09-13 00:42:22.476814 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:42:22.476826 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:42:22.476839 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:42:22.476851 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:42:22.476862 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:42:22.476874 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:22.476886 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:42:22.476899 | orchestrator |
2025-09-13 00:42:22.476911 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:42:22.476924 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:42:22.476938 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:42:22.476951 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:42:22.476963 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:42:22.476975 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:42:22.476988 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:42:22.477001 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:42:22.477013 | orchestrator |
2025-09-13 00:42:22.477036 | orchestrator |
2025-09-13 00:42:22.477049 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:42:22.477062 | orchestrator | Saturday 13 September 2025 00:42:22 +0000 (0:00:00.619) 0:00:07.528 ****
2025-09-13 00:42:22.477075 | orchestrator | ===============================================================================
2025-09-13 00:42:22.477085 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.54s
2025-09-13 00:42:22.477096 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.12s
2025-09-13 00:42:22.477107 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.01s
2025-09-13 00:42:22.477118 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s
2025-09-13 00:42:24.396241 | orchestrator | 2025-09-13 00:42:24 | INFO  | Task 8700ffe4-edfd-407a-9f0d-744ad60de62b (ceph-configure-lvm-volumes) was prepared for execution.
2025-09-13 00:42:24.396342 | orchestrator | 2025-09-13 00:42:24 | INFO  | It takes a moment until task 8700ffe4-edfd-407a-9f0d-744ad60de62b (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-09-13 00:42:35.691515 | orchestrator |
2025-09-13 00:42:35.691617 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-13 00:42:35.691633 | orchestrator |
2025-09-13 00:42:35.691646 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-13 00:42:35.691657 | orchestrator | Saturday 13 September 2025 00:42:28 +0000 (0:00:00.318) 0:00:00.318 ****
2025-09-13 00:42:35.691669 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-13 00:42:35.691680 | orchestrator |
2025-09-13 00:42:35.691692 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-13 00:42:35.691702 | orchestrator | Saturday 13 September 2025 00:42:28 +0000 (0:00:00.237) 0:00:00.556 ****
2025-09-13 00:42:35.691743 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:42:35.691757 | orchestrator |
2025-09-13 00:42:35.691768 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:42:35.691779 | orchestrator |
Saturday 13 September 2025 00:42:28 +0000 (0:00:00.328) 0:00:00.747 ****
2025-09-13 00:42:35.691789 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-13 00:42:35.691801 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-13 00:42:35.691812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-13 00:42:35.691834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-13 00:42:35.691846 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-13 00:42:35.691856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-13 00:42:35.691867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-13 00:42:35.691878 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-13 00:42:35.691889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-13 00:42:35.691900 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-13 00:42:35.691911 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-13 00:42:35.691922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-13 00:42:35.691932 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-13 00:42:35.691943 | orchestrator |
2025-09-13 00:42:35.691954 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:42:35.691965 | orchestrator | Saturday 13 September 2025 00:42:29 +0000 (0:00:00.328) 0:00:01.075 ****
2025-09-13 00:42:35.691976 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:42:35.692007 | orchestrator |
2025-09-13 00:42:35.692019 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:42:35.692030 | orchestrator | Saturday 13 September 2025 00:42:29 +0000 (0:00:00.439) 0:00:01.515 ****
2025-09-13 00:42:35.692042 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:42:35.692055 | orchestrator |
2025-09-13 00:42:35.692067 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:42:35.692079 | orchestrator | Saturday 13 September 2025 00:42:29 +0000 (0:00:00.225) 0:00:01.741 ****
2025-09-13 00:42:35.692091 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:42:35.692102 | orchestrator |
2025-09-13 00:42:35.692115 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:42:35.692127 | orchestrator | Saturday 13 September 2025 00:42:29 +0000 (0:00:00.182) 0:00:01.923 ****
2025-09-13 00:42:35.692139 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:42:35.692155 | orchestrator |
2025-09-13 00:42:35.692168 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:42:35.692180 | orchestrator | Saturday 13 September 2025 00:42:30 +0000 (0:00:00.204) 0:00:02.128 ****
2025-09-13 00:42:35.692192 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:42:35.692204 | orchestrator |
2025-09-13 00:42:35.692217 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:42:35.692229 | orchestrator | Saturday 13 September 2025 00:42:30 +0000 (0:00:00.195) 0:00:02.323 ****
2025-09-13 00:42:35.692241 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:42:35.692253 | orchestrator |
2025-09-13 00:42:35.692265 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:42:35.692278 | orchestrator | Saturday 13 September 2025 00:42:30 +0000 (0:00:00.192) 0:00:02.515 ****
2025-09-13 00:42:35.692290 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:42:35.692302 | orchestrator |
2025-09-13 00:42:35.692314 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:42:35.692325 | orchestrator | Saturday 13 September 2025 00:42:30 +0000 (0:00:00.203) 0:00:02.719 ****
2025-09-13 00:42:35.692336 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:42:35.692346 | orchestrator |
2025-09-13 00:42:35.692357 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:42:35.692368 | orchestrator | Saturday 13 September 2025 00:42:30 +0000 (0:00:00.211) 0:00:02.931 ****
2025-09-13 00:42:35.692378 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d)
2025-09-13 00:42:35.692391 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d)
2025-09-13 00:42:35.692401 | orchestrator |
2025-09-13 00:42:35.692412 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:42:35.692423 | orchestrator | Saturday 13 September 2025 00:42:31 +0000 (0:00:00.416) 0:00:03.347 ****
2025-09-13 00:42:35.692453 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6e724704-b413-40a8-af93-f723a1c0b62f)
2025-09-13 00:42:35.692464 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6e724704-b413-40a8-af93-f723a1c0b62f)
2025-09-13 00:42:35.692475 | orchestrator |
2025-09-13 00:42:35.692486 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:42:35.692497 | orchestrator | Saturday 13 September 2025 00:42:31 +0000 (0:00:00.391) 0:00:03.738 ****
2025-09-13 00:42:35.692513 |
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e25c372e-2cb9-47f6-a0c5-1defd25ac71c) 2025-09-13 00:42:35.692524 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e25c372e-2cb9-47f6-a0c5-1defd25ac71c) 2025-09-13 00:42:35.692535 | orchestrator | 2025-09-13 00:42:35.692546 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:42:35.692556 | orchestrator | Saturday 13 September 2025 00:42:32 +0000 (0:00:00.610) 0:00:04.348 **** 2025-09-13 00:42:35.692567 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0c46d17e-adbc-49dd-8bd7-8befc745e964) 2025-09-13 00:42:35.692585 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0c46d17e-adbc-49dd-8bd7-8befc745e964) 2025-09-13 00:42:35.692596 | orchestrator | 2025-09-13 00:42:35.692606 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:42:35.692617 | orchestrator | Saturday 13 September 2025 00:42:32 +0000 (0:00:00.622) 0:00:04.971 **** 2025-09-13 00:42:35.692628 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-13 00:42:35.692639 | orchestrator | 2025-09-13 00:42:35.692649 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:42:35.692660 | orchestrator | Saturday 13 September 2025 00:42:33 +0000 (0:00:00.734) 0:00:05.706 **** 2025-09-13 00:42:35.692671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-13 00:42:35.692681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-13 00:42:35.692692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-13 00:42:35.692703 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-09-13 00:42:35.692729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-13 00:42:35.692741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-13 00:42:35.692752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-13 00:42:35.692762 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-13 00:42:35.692773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-13 00:42:35.692783 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-13 00:42:35.692794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-13 00:42:35.692805 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-13 00:42:35.692815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-13 00:42:35.692826 | orchestrator | 2025-09-13 00:42:35.692837 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:42:35.692847 | orchestrator | Saturday 13 September 2025 00:42:34 +0000 (0:00:00.419) 0:00:06.125 **** 2025-09-13 00:42:35.692858 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:35.692869 | orchestrator | 2025-09-13 00:42:35.692880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:42:35.692891 | orchestrator | Saturday 13 September 2025 00:42:34 +0000 (0:00:00.198) 0:00:06.324 **** 2025-09-13 00:42:35.692901 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:35.692912 | orchestrator | 2025-09-13 00:42:35.692922 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-09-13 00:42:35.692933 | orchestrator | Saturday 13 September 2025 00:42:34 +0000 (0:00:00.201) 0:00:06.526 **** 2025-09-13 00:42:35.692944 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:35.692955 | orchestrator | 2025-09-13 00:42:35.692966 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:42:35.692976 | orchestrator | Saturday 13 September 2025 00:42:34 +0000 (0:00:00.233) 0:00:06.759 **** 2025-09-13 00:42:35.692987 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:35.692998 | orchestrator | 2025-09-13 00:42:35.693009 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:42:35.693019 | orchestrator | Saturday 13 September 2025 00:42:34 +0000 (0:00:00.189) 0:00:06.948 **** 2025-09-13 00:42:35.693030 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:35.693041 | orchestrator | 2025-09-13 00:42:35.693058 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:42:35.693069 | orchestrator | Saturday 13 September 2025 00:42:35 +0000 (0:00:00.189) 0:00:07.138 **** 2025-09-13 00:42:35.693080 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:35.693091 | orchestrator | 2025-09-13 00:42:35.693101 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:42:35.693112 | orchestrator | Saturday 13 September 2025 00:42:35 +0000 (0:00:00.196) 0:00:07.335 **** 2025-09-13 00:42:35.693123 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:35.693133 | orchestrator | 2025-09-13 00:42:35.693144 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:42:35.693155 | orchestrator | Saturday 13 September 2025 00:42:35 +0000 (0:00:00.188) 0:00:07.523 **** 2025-09-13 00:42:35.693172 | 
orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:43.328312 | orchestrator | 2025-09-13 00:42:43.328425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:42:43.328442 | orchestrator | Saturday 13 September 2025 00:42:35 +0000 (0:00:00.183) 0:00:07.707 **** 2025-09-13 00:42:43.328455 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-13 00:42:43.328467 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-13 00:42:43.328478 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-13 00:42:43.328490 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-13 00:42:43.328501 | orchestrator | 2025-09-13 00:42:43.328512 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:42:43.328523 | orchestrator | Saturday 13 September 2025 00:42:36 +0000 (0:00:00.972) 0:00:08.679 **** 2025-09-13 00:42:43.328554 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:43.328566 | orchestrator | 2025-09-13 00:42:43.328577 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:42:43.328587 | orchestrator | Saturday 13 September 2025 00:42:36 +0000 (0:00:00.198) 0:00:08.878 **** 2025-09-13 00:42:43.328598 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:43.328609 | orchestrator | 2025-09-13 00:42:43.328620 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:42:43.328631 | orchestrator | Saturday 13 September 2025 00:42:37 +0000 (0:00:00.204) 0:00:09.083 **** 2025-09-13 00:42:43.328642 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:43.328653 | orchestrator | 2025-09-13 00:42:43.328664 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:42:43.328675 | orchestrator | Saturday 13 September 2025 00:42:37 +0000 (0:00:00.197) 
0:00:09.280 **** 2025-09-13 00:42:43.328686 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:43.328697 | orchestrator | 2025-09-13 00:42:43.328708 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-13 00:42:43.328719 | orchestrator | Saturday 13 September 2025 00:42:37 +0000 (0:00:00.185) 0:00:09.466 **** 2025-09-13 00:42:43.328778 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-09-13 00:42:43.328790 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-09-13 00:42:43.328801 | orchestrator | 2025-09-13 00:42:43.328812 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-13 00:42:43.328823 | orchestrator | Saturday 13 September 2025 00:42:37 +0000 (0:00:00.168) 0:00:09.634 **** 2025-09-13 00:42:43.328834 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:43.328845 | orchestrator | 2025-09-13 00:42:43.328857 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-13 00:42:43.328870 | orchestrator | Saturday 13 September 2025 00:42:37 +0000 (0:00:00.139) 0:00:09.773 **** 2025-09-13 00:42:43.328882 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:43.328894 | orchestrator | 2025-09-13 00:42:43.328907 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-13 00:42:43.328919 | orchestrator | Saturday 13 September 2025 00:42:37 +0000 (0:00:00.143) 0:00:09.917 **** 2025-09-13 00:42:43.328931 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:43.328969 | orchestrator | 2025-09-13 00:42:43.328982 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-13 00:42:43.328994 | orchestrator | Saturday 13 September 2025 00:42:38 +0000 (0:00:00.163) 0:00:10.081 **** 2025-09-13 00:42:43.329006 | orchestrator | ok: 
[testbed-node-3] 2025-09-13 00:42:43.329018 | orchestrator | 2025-09-13 00:42:43.329030 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-13 00:42:43.329042 | orchestrator | Saturday 13 September 2025 00:42:38 +0000 (0:00:00.137) 0:00:10.219 **** 2025-09-13 00:42:43.329055 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '741132e6-4e77-5ad5-aab1-a12c98657a1e'}}) 2025-09-13 00:42:43.329067 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c9c3f5f4-a401-5886-82fa-33c7ca08590f'}}) 2025-09-13 00:42:43.329079 | orchestrator | 2025-09-13 00:42:43.329091 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-13 00:42:43.329103 | orchestrator | Saturday 13 September 2025 00:42:38 +0000 (0:00:00.163) 0:00:10.382 **** 2025-09-13 00:42:43.329116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '741132e6-4e77-5ad5-aab1-a12c98657a1e'}})  2025-09-13 00:42:43.329136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c9c3f5f4-a401-5886-82fa-33c7ca08590f'}})  2025-09-13 00:42:43.329148 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:43.329161 | orchestrator | 2025-09-13 00:42:43.329173 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-13 00:42:43.329187 | orchestrator | Saturday 13 September 2025 00:42:38 +0000 (0:00:00.177) 0:00:10.560 **** 2025-09-13 00:42:43.329199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '741132e6-4e77-5ad5-aab1-a12c98657a1e'}})  2025-09-13 00:42:43.329211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c9c3f5f4-a401-5886-82fa-33c7ca08590f'}})  2025-09-13 00:42:43.329222 | orchestrator | skipping: [testbed-node-3] 2025-09-13 
00:42:43.329232 | orchestrator | 2025-09-13 00:42:43.329243 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-13 00:42:43.329254 | orchestrator | Saturday 13 September 2025 00:42:38 +0000 (0:00:00.339) 0:00:10.900 **** 2025-09-13 00:42:43.329265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '741132e6-4e77-5ad5-aab1-a12c98657a1e'}})  2025-09-13 00:42:43.329276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c9c3f5f4-a401-5886-82fa-33c7ca08590f'}})  2025-09-13 00:42:43.329286 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:43.329297 | orchestrator | 2025-09-13 00:42:43.329327 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-13 00:42:43.329339 | orchestrator | Saturday 13 September 2025 00:42:39 +0000 (0:00:00.131) 0:00:11.031 **** 2025-09-13 00:42:43.329350 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:42:43.329360 | orchestrator | 2025-09-13 00:42:43.329371 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-13 00:42:43.329382 | orchestrator | Saturday 13 September 2025 00:42:39 +0000 (0:00:00.127) 0:00:11.158 **** 2025-09-13 00:42:43.329393 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:42:43.329404 | orchestrator | 2025-09-13 00:42:43.329415 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-13 00:42:43.329425 | orchestrator | Saturday 13 September 2025 00:42:39 +0000 (0:00:00.146) 0:00:11.305 **** 2025-09-13 00:42:43.329436 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:43.329447 | orchestrator | 2025-09-13 00:42:43.329458 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-13 00:42:43.329469 | orchestrator | Saturday 13 September 2025 00:42:39 +0000 
(0:00:00.125) 0:00:11.430 **** 2025-09-13 00:42:43.329479 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:43.329490 | orchestrator | 2025-09-13 00:42:43.329509 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-13 00:42:43.329520 | orchestrator | Saturday 13 September 2025 00:42:39 +0000 (0:00:00.136) 0:00:11.566 **** 2025-09-13 00:42:43.329531 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:43.329542 | orchestrator | 2025-09-13 00:42:43.329553 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-13 00:42:43.329564 | orchestrator | Saturday 13 September 2025 00:42:39 +0000 (0:00:00.131) 0:00:11.698 **** 2025-09-13 00:42:43.329575 | orchestrator | ok: [testbed-node-3] => { 2025-09-13 00:42:43.329585 | orchestrator |  "ceph_osd_devices": { 2025-09-13 00:42:43.329597 | orchestrator |  "sdb": { 2025-09-13 00:42:43.329608 | orchestrator |  "osd_lvm_uuid": "741132e6-4e77-5ad5-aab1-a12c98657a1e" 2025-09-13 00:42:43.329619 | orchestrator |  }, 2025-09-13 00:42:43.329629 | orchestrator |  "sdc": { 2025-09-13 00:42:43.329640 | orchestrator |  "osd_lvm_uuid": "c9c3f5f4-a401-5886-82fa-33c7ca08590f" 2025-09-13 00:42:43.329651 | orchestrator |  } 2025-09-13 00:42:43.329662 | orchestrator |  } 2025-09-13 00:42:43.329673 | orchestrator | } 2025-09-13 00:42:43.329684 | orchestrator | 2025-09-13 00:42:43.329695 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-13 00:42:43.329706 | orchestrator | Saturday 13 September 2025 00:42:39 +0000 (0:00:00.154) 0:00:11.852 **** 2025-09-13 00:42:43.329717 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:43.329747 | orchestrator | 2025-09-13 00:42:43.329759 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-13 00:42:43.329770 | orchestrator | Saturday 13 September 2025 00:42:39 +0000 
(0:00:00.147) 0:00:11.999 **** 2025-09-13 00:42:43.329786 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:43.329798 | orchestrator | 2025-09-13 00:42:43.329809 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-13 00:42:43.329819 | orchestrator | Saturday 13 September 2025 00:42:40 +0000 (0:00:00.139) 0:00:12.139 **** 2025-09-13 00:42:43.329830 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:42:43.329841 | orchestrator | 2025-09-13 00:42:43.329852 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-13 00:42:43.329863 | orchestrator | Saturday 13 September 2025 00:42:40 +0000 (0:00:00.125) 0:00:12.265 **** 2025-09-13 00:42:43.329873 | orchestrator | changed: [testbed-node-3] => { 2025-09-13 00:42:43.329884 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-13 00:42:43.329895 | orchestrator |  "ceph_osd_devices": { 2025-09-13 00:42:43.329906 | orchestrator |  "sdb": { 2025-09-13 00:42:43.329916 | orchestrator |  "osd_lvm_uuid": "741132e6-4e77-5ad5-aab1-a12c98657a1e" 2025-09-13 00:42:43.329927 | orchestrator |  }, 2025-09-13 00:42:43.329938 | orchestrator |  "sdc": { 2025-09-13 00:42:43.329949 | orchestrator |  "osd_lvm_uuid": "c9c3f5f4-a401-5886-82fa-33c7ca08590f" 2025-09-13 00:42:43.329960 | orchestrator |  } 2025-09-13 00:42:43.329971 | orchestrator |  }, 2025-09-13 00:42:43.329981 | orchestrator |  "lvm_volumes": [ 2025-09-13 00:42:43.329992 | orchestrator |  { 2025-09-13 00:42:43.330003 | orchestrator |  "data": "osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e", 2025-09-13 00:42:43.330013 | orchestrator |  "data_vg": "ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e" 2025-09-13 00:42:43.330086 | orchestrator |  }, 2025-09-13 00:42:43.330098 | orchestrator |  { 2025-09-13 00:42:43.330109 | orchestrator |  "data": "osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f", 2025-09-13 00:42:43.330120 | orchestrator |  "data_vg": 
"ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f" 2025-09-13 00:42:43.330130 | orchestrator |  } 2025-09-13 00:42:43.330141 | orchestrator |  ] 2025-09-13 00:42:43.330152 | orchestrator |  } 2025-09-13 00:42:43.330162 | orchestrator | } 2025-09-13 00:42:43.330173 | orchestrator | 2025-09-13 00:42:43.330184 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-13 00:42:43.330203 | orchestrator | Saturday 13 September 2025 00:42:40 +0000 (0:00:00.203) 0:00:12.468 **** 2025-09-13 00:42:43.330214 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-13 00:42:43.330224 | orchestrator | 2025-09-13 00:42:43.330235 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-13 00:42:43.330246 | orchestrator | 2025-09-13 00:42:43.330257 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-13 00:42:43.330267 | orchestrator | Saturday 13 September 2025 00:42:42 +0000 (0:00:02.403) 0:00:14.872 **** 2025-09-13 00:42:43.330278 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-13 00:42:43.330288 | orchestrator | 2025-09-13 00:42:43.330299 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-13 00:42:43.330310 | orchestrator | Saturday 13 September 2025 00:42:43 +0000 (0:00:00.253) 0:00:15.126 **** 2025-09-13 00:42:43.330320 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:42:43.330331 | orchestrator | 2025-09-13 00:42:43.330342 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:42:43.330361 | orchestrator | Saturday 13 September 2025 00:42:43 +0000 (0:00:00.220) 0:00:15.346 **** 2025-09-13 00:42:51.354611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-13 00:42:51.354727 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-13 00:42:51.354795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-13 00:42:51.354807 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-13 00:42:51.354819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-13 00:42:51.354830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-13 00:42:51.354841 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-13 00:42:51.354852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-13 00:42:51.354863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-13 00:42:51.354875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-13 00:42:51.354907 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-13 00:42:51.354919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-13 00:42:51.354930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-13 00:42:51.354945 | orchestrator | 2025-09-13 00:42:51.354958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:42:51.354971 | orchestrator | Saturday 13 September 2025 00:42:43 +0000 (0:00:00.384) 0:00:15.731 **** 2025-09-13 00:42:51.354982 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:42:51.354994 | orchestrator | 2025-09-13 00:42:51.355005 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 
00:42:51.355016 | orchestrator | Saturday 13 September 2025 00:42:43 +0000 (0:00:00.202) 0:00:15.933 **** 2025-09-13 00:42:51.355027 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:42:51.355038 | orchestrator | 2025-09-13 00:42:51.355049 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:42:51.355060 | orchestrator | Saturday 13 September 2025 00:42:44 +0000 (0:00:00.181) 0:00:16.114 **** 2025-09-13 00:42:51.355072 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:42:51.355083 | orchestrator | 2025-09-13 00:42:51.355094 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:42:51.355104 | orchestrator | Saturday 13 September 2025 00:42:44 +0000 (0:00:00.188) 0:00:16.303 **** 2025-09-13 00:42:51.355116 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:42:51.355150 | orchestrator | 2025-09-13 00:42:51.355165 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:42:51.355178 | orchestrator | Saturday 13 September 2025 00:42:44 +0000 (0:00:00.237) 0:00:16.541 **** 2025-09-13 00:42:51.355190 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:42:51.355202 | orchestrator | 2025-09-13 00:42:51.355215 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:42:51.355227 | orchestrator | Saturday 13 September 2025 00:42:45 +0000 (0:00:00.599) 0:00:17.141 **** 2025-09-13 00:42:51.355239 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:42:51.355252 | orchestrator | 2025-09-13 00:42:51.355264 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:42:51.355276 | orchestrator | Saturday 13 September 2025 00:42:45 +0000 (0:00:00.199) 0:00:17.340 **** 2025-09-13 00:42:51.355288 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:42:51.355300 | 
orchestrator | 2025-09-13 00:42:51.355312 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:42:51.355325 | orchestrator | Saturday 13 September 2025 00:42:45 +0000 (0:00:00.198) 0:00:17.538 **** 2025-09-13 00:42:51.355337 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:42:51.355349 | orchestrator | 2025-09-13 00:42:51.355361 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:42:51.355373 | orchestrator | Saturday 13 September 2025 00:42:45 +0000 (0:00:00.222) 0:00:17.760 **** 2025-09-13 00:42:51.355386 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8) 2025-09-13 00:42:51.355399 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8) 2025-09-13 00:42:51.355411 | orchestrator | 2025-09-13 00:42:51.355423 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:42:51.355435 | orchestrator | Saturday 13 September 2025 00:42:46 +0000 (0:00:00.428) 0:00:18.189 **** 2025-09-13 00:42:51.355448 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e924364d-2e91-46ce-bd4b-cca5d229d1e6) 2025-09-13 00:42:51.355461 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e924364d-2e91-46ce-bd4b-cca5d229d1e6) 2025-09-13 00:42:51.355473 | orchestrator | 2025-09-13 00:42:51.355485 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:42:51.355498 | orchestrator | Saturday 13 September 2025 00:42:46 +0000 (0:00:00.426) 0:00:18.616 **** 2025-09-13 00:42:51.355509 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f868cbab-65ba-4325-b003-03d97073cddb) 2025-09-13 00:42:51.355520 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_f868cbab-65ba-4325-b003-03d97073cddb) 2025-09-13 00:42:51.355531 | orchestrator | 2025-09-13 00:42:51.355542 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:42:51.355553 | orchestrator | Saturday 13 September 2025 00:42:47 +0000 (0:00:00.451) 0:00:19.067 **** 2025-09-13 00:42:51.355581 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5a3f219a-02e3-456c-9d7f-0c5a8049cd2b) 2025-09-13 00:42:51.355593 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5a3f219a-02e3-456c-9d7f-0c5a8049cd2b) 2025-09-13 00:42:51.355604 | orchestrator | 2025-09-13 00:42:51.355615 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:42:51.355627 | orchestrator | Saturday 13 September 2025 00:42:47 +0000 (0:00:00.459) 0:00:19.526 **** 2025-09-13 00:42:51.355637 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-13 00:42:51.355648 | orchestrator | 2025-09-13 00:42:51.355659 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:42:51.355676 | orchestrator | Saturday 13 September 2025 00:42:47 +0000 (0:00:00.325) 0:00:19.851 **** 2025-09-13 00:42:51.355687 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-13 00:42:51.355706 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-13 00:42:51.355717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-13 00:42:51.355727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-13 00:42:51.355756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-13 00:42:51.355768 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-13 00:42:51.355779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-13 00:42:51.355790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-13 00:42:51.355800 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-13 00:42:51.355811 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-13 00:42:51.355822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-13 00:42:51.355833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-13 00:42:51.355843 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-13 00:42:51.355854 | orchestrator | 2025-09-13 00:42:51.355865 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:42:51.355876 | orchestrator | Saturday 13 September 2025 00:42:48 +0000 (0:00:00.375) 0:00:20.227 **** 2025-09-13 00:42:51.355886 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:42:51.355897 | orchestrator | 2025-09-13 00:42:51.355908 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:42:51.355919 | orchestrator | Saturday 13 September 2025 00:42:48 +0000 (0:00:00.200) 0:00:20.427 **** 2025-09-13 00:42:51.355929 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:42:51.355940 | orchestrator | 2025-09-13 00:42:51.355951 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:42:51.355962 | orchestrator | Saturday 13 September 2025 00:42:49 +0000 (0:00:00.637) 0:00:21.064 **** 
2025-09-13 00:42:51.355972 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:51.355983 | orchestrator |
2025-09-13 00:42:51.355994 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:42:51.356005 | orchestrator | Saturday 13 September 2025 00:42:49 +0000 (0:00:00.209) 0:00:21.274 ****
2025-09-13 00:42:51.356015 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:51.356026 | orchestrator |
2025-09-13 00:42:51.356037 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:42:51.356048 | orchestrator | Saturday 13 September 2025 00:42:49 +0000 (0:00:00.196) 0:00:21.470 ****
2025-09-13 00:42:51.356059 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:51.356070 | orchestrator |
2025-09-13 00:42:51.356080 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:42:51.356091 | orchestrator | Saturday 13 September 2025 00:42:49 +0000 (0:00:00.217) 0:00:21.688 ****
2025-09-13 00:42:51.356102 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:51.356113 | orchestrator |
2025-09-13 00:42:51.356123 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:42:51.356134 | orchestrator | Saturday 13 September 2025 00:42:49 +0000 (0:00:00.218) 0:00:21.906 ****
2025-09-13 00:42:51.356145 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:51.356155 | orchestrator |
2025-09-13 00:42:51.356166 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:42:51.356177 | orchestrator | Saturday 13 September 2025 00:42:50 +0000 (0:00:00.273) 0:00:22.180 ****
2025-09-13 00:42:51.356188 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:51.356199 | orchestrator |
2025-09-13 00:42:51.356210 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:42:51.356227 | orchestrator | Saturday 13 September 2025 00:42:50 +0000 (0:00:00.260) 0:00:22.441 ****
2025-09-13 00:42:51.356238 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-09-13 00:42:51.356249 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-09-13 00:42:51.356261 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-09-13 00:42:51.356272 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-09-13 00:42:51.356282 | orchestrator |
2025-09-13 00:42:51.356293 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:42:51.356304 | orchestrator | Saturday 13 September 2025 00:42:51 +0000 (0:00:00.735) 0:00:23.177 ****
2025-09-13 00:42:51.356315 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:51.356326 | orchestrator |
2025-09-13 00:42:51.356343 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:42:56.461511 | orchestrator | Saturday 13 September 2025 00:42:51 +0000 (0:00:00.196) 0:00:23.373 ****
2025-09-13 00:42:56.461620 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:56.461636 | orchestrator |
2025-09-13 00:42:56.461648 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:42:56.461660 | orchestrator | Saturday 13 September 2025 00:42:51 +0000 (0:00:00.167) 0:00:23.541 ****
2025-09-13 00:42:56.461671 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:56.461682 | orchestrator |
2025-09-13 00:42:56.461694 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:42:56.461705 | orchestrator | Saturday 13 September 2025 00:42:51 +0000 (0:00:00.154) 0:00:23.696 ****
2025-09-13 00:42:56.461716 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:56.461727 | orchestrator |
2025-09-13 00:42:56.461795 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-13 00:42:56.461811 | orchestrator | Saturday 13 September 2025 00:42:51 +0000 (0:00:00.205) 0:00:23.901 ****
2025-09-13 00:42:56.461822 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-09-13 00:42:56.461833 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-09-13 00:42:56.461844 | orchestrator |
2025-09-13 00:42:56.461855 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-13 00:42:56.461867 | orchestrator | Saturday 13 September 2025 00:42:52 +0000 (0:00:00.279) 0:00:24.180 ****
2025-09-13 00:42:56.461878 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:56.461889 | orchestrator |
2025-09-13 00:42:56.461900 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-13 00:42:56.461911 | orchestrator | Saturday 13 September 2025 00:42:52 +0000 (0:00:00.124) 0:00:24.305 ****
2025-09-13 00:42:56.461923 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:56.461934 | orchestrator |
2025-09-13 00:42:56.461945 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-13 00:42:56.461956 | orchestrator | Saturday 13 September 2025 00:42:52 +0000 (0:00:00.112) 0:00:24.417 ****
2025-09-13 00:42:56.461967 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:56.461978 | orchestrator |
2025-09-13 00:42:56.461989 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-13 00:42:56.462000 | orchestrator | Saturday 13 September 2025 00:42:52 +0000 (0:00:00.109) 0:00:24.527 ****
2025-09-13 00:42:56.462011 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:42:56.462091 | orchestrator |
2025-09-13 00:42:56.462115 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-13 00:42:56.462136 | orchestrator | Saturday 13 September 2025 00:42:52 +0000 (0:00:00.107) 0:00:24.634 ****
2025-09-13 00:42:56.462156 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b9d4bd55-4398-5073-b181-64dcd216e500'}})
2025-09-13 00:42:56.462170 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b087737a-96b5-5170-ab1c-c312068a0bca'}})
2025-09-13 00:42:56.462183 | orchestrator |
2025-09-13 00:42:56.462195 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-13 00:42:56.462228 | orchestrator | Saturday 13 September 2025 00:42:52 +0000 (0:00:00.143) 0:00:24.777 ****
2025-09-13 00:42:56.462241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b9d4bd55-4398-5073-b181-64dcd216e500'}})
2025-09-13 00:42:56.462255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b087737a-96b5-5170-ab1c-c312068a0bca'}})
2025-09-13 00:42:56.462267 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:56.462279 | orchestrator |
2025-09-13 00:42:56.462292 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-13 00:42:56.462304 | orchestrator | Saturday 13 September 2025 00:42:52 +0000 (0:00:00.115) 0:00:24.893 ****
2025-09-13 00:42:56.462316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b9d4bd55-4398-5073-b181-64dcd216e500'}})
2025-09-13 00:42:56.462329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b087737a-96b5-5170-ab1c-c312068a0bca'}})
2025-09-13 00:42:56.462341 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:56.462352 | orchestrator |
2025-09-13 00:42:56.462365 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-13 00:42:56.462377 | orchestrator | Saturday 13 September 2025 00:42:52 +0000 (0:00:00.119) 0:00:25.012 ****
2025-09-13 00:42:56.462390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b9d4bd55-4398-5073-b181-64dcd216e500'}})
2025-09-13 00:42:56.462402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b087737a-96b5-5170-ab1c-c312068a0bca'}})
2025-09-13 00:42:56.462415 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:56.462426 | orchestrator |
2025-09-13 00:42:56.462437 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-13 00:42:56.462447 | orchestrator | Saturday 13 September 2025 00:42:53 +0000 (0:00:00.118) 0:00:25.130 ****
2025-09-13 00:42:56.462458 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:42:56.462469 | orchestrator |
2025-09-13 00:42:56.462480 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-13 00:42:56.462490 | orchestrator | Saturday 13 September 2025 00:42:53 +0000 (0:00:00.103) 0:00:25.234 ****
2025-09-13 00:42:56.462501 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:42:56.462512 | orchestrator |
2025-09-13 00:42:56.462523 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-13 00:42:56.462534 | orchestrator | Saturday 13 September 2025 00:42:53 +0000 (0:00:00.115) 0:00:25.350 ****
2025-09-13 00:42:56.462545 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:56.462555 | orchestrator |
2025-09-13 00:42:56.462586 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-13 00:42:56.462597 | orchestrator | Saturday 13 September 2025 00:42:53 +0000 (0:00:00.101) 0:00:25.451 ****
2025-09-13 00:42:56.462608 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:56.462619 | orchestrator |
2025-09-13 00:42:56.462630 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-13 00:42:56.462641 | orchestrator | Saturday 13 September 2025 00:42:53 +0000 (0:00:00.257) 0:00:25.708 ****
2025-09-13 00:42:56.462651 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:56.462662 | orchestrator |
2025-09-13 00:42:56.462673 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-13 00:42:56.462684 | orchestrator | Saturday 13 September 2025 00:42:53 +0000 (0:00:00.106) 0:00:25.815 ****
2025-09-13 00:42:56.462695 | orchestrator | ok: [testbed-node-4] => {
2025-09-13 00:42:56.462705 | orchestrator |     "ceph_osd_devices": {
2025-09-13 00:42:56.462716 | orchestrator |         "sdb": {
2025-09-13 00:42:56.462727 | orchestrator |             "osd_lvm_uuid": "b9d4bd55-4398-5073-b181-64dcd216e500"
2025-09-13 00:42:56.462738 | orchestrator |         },
2025-09-13 00:42:56.462772 | orchestrator |         "sdc": {
2025-09-13 00:42:56.462792 | orchestrator |             "osd_lvm_uuid": "b087737a-96b5-5170-ab1c-c312068a0bca"
2025-09-13 00:42:56.462803 | orchestrator |         }
2025-09-13 00:42:56.462814 | orchestrator |     }
2025-09-13 00:42:56.462825 | orchestrator | }
2025-09-13 00:42:56.462836 | orchestrator |
2025-09-13 00:42:56.462847 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-13 00:42:56.462858 | orchestrator | Saturday 13 September 2025 00:42:53 +0000 (0:00:00.115) 0:00:25.931 ****
2025-09-13 00:42:56.462869 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:56.462879 | orchestrator |
2025-09-13 00:42:56.462897 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-13 00:42:56.462909 | orchestrator | Saturday 13 September 2025 00:42:54 +0000 (0:00:00.123) 0:00:26.055 ****
2025-09-13 00:42:56.462920 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:56.462930 | orchestrator |
2025-09-13 00:42:56.462941 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-13 00:42:56.462952 | orchestrator | Saturday 13 September 2025 00:42:54 +0000 (0:00:00.105) 0:00:26.161 ****
2025-09-13 00:42:56.462962 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:42:56.462973 | orchestrator |
2025-09-13 00:42:56.462984 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-13 00:42:56.462995 | orchestrator | Saturday 13 September 2025 00:42:54 +0000 (0:00:00.099) 0:00:26.261 ****
2025-09-13 00:42:56.463005 | orchestrator | changed: [testbed-node-4] => {
2025-09-13 00:42:56.463016 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-13 00:42:56.463026 | orchestrator |         "ceph_osd_devices": {
2025-09-13 00:42:56.463037 | orchestrator |             "sdb": {
2025-09-13 00:42:56.463048 | orchestrator |                 "osd_lvm_uuid": "b9d4bd55-4398-5073-b181-64dcd216e500"
2025-09-13 00:42:56.463064 | orchestrator |             },
2025-09-13 00:42:56.463075 | orchestrator |             "sdc": {
2025-09-13 00:42:56.463086 | orchestrator |                 "osd_lvm_uuid": "b087737a-96b5-5170-ab1c-c312068a0bca"
2025-09-13 00:42:56.463097 | orchestrator |             }
2025-09-13 00:42:56.463107 | orchestrator |         },
2025-09-13 00:42:56.463118 | orchestrator |         "lvm_volumes": [
2025-09-13 00:42:56.463129 | orchestrator |             {
2025-09-13 00:42:56.463140 | orchestrator |                 "data": "osd-block-b9d4bd55-4398-5073-b181-64dcd216e500",
2025-09-13 00:42:56.463150 | orchestrator |                 "data_vg": "ceph-b9d4bd55-4398-5073-b181-64dcd216e500"
2025-09-13 00:42:56.463161 | orchestrator |             },
2025-09-13 00:42:56.463172 | orchestrator |             {
2025-09-13 00:42:56.463183 | orchestrator |                 "data": "osd-block-b087737a-96b5-5170-ab1c-c312068a0bca",
2025-09-13 00:42:56.463193 | orchestrator |                 "data_vg": "ceph-b087737a-96b5-5170-ab1c-c312068a0bca"
2025-09-13 00:42:56.463204 | orchestrator |             }
2025-09-13 00:42:56.463215 | orchestrator |         ]
2025-09-13 00:42:56.463225 | orchestrator |     }
2025-09-13 00:42:56.463236 | orchestrator | }
2025-09-13 00:42:56.463246 | orchestrator |
2025-09-13 00:42:56.463257 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-13 00:42:56.463268 | orchestrator | Saturday 13 September 2025 00:42:54 +0000 (0:00:00.170) 0:00:26.431 ****
2025-09-13 00:42:56.463279 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-13 00:42:56.463289 | orchestrator |
2025-09-13 00:42:56.463300 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-13 00:42:56.463310 | orchestrator |
2025-09-13 00:42:56.463321 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-13 00:42:56.463332 | orchestrator | Saturday 13 September 2025 00:42:55 +0000 (0:00:00.889) 0:00:27.321 ****
2025-09-13 00:42:56.463343 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-13 00:42:56.463353 | orchestrator |
2025-09-13 00:42:56.463364 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-13 00:42:56.463374 | orchestrator | Saturday 13 September 2025 00:42:55 +0000 (0:00:00.354) 0:00:27.676 ****
2025-09-13 00:42:56.463392 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:42:56.463403 | orchestrator |
2025-09-13 00:42:56.463413 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:42:56.463424 | orchestrator | Saturday 13 September 2025 00:42:56 +0000 (0:00:00.477) 0:00:28.154 ****
2025-09-13 00:42:56.463435 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-09-13 00:42:56.463445 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-09-13 00:42:56.463456 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-09-13 00:42:56.463467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-09-13 00:42:56.463477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-09-13 00:42:56.463488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-09-13 00:42:56.463505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-09-13 00:43:03.595302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-09-13 00:43:03.595418 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-09-13 00:43:03.595434 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-09-13 00:43:03.595446 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-09-13 00:43:03.595456 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-09-13 00:43:03.595467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-09-13 00:43:03.595479 | orchestrator |
2025-09-13 00:43:03.595491 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:43:03.595502 | orchestrator | Saturday 13 September 2025 00:42:56 +0000 (0:00:00.326) 0:00:28.480 ****
2025-09-13 00:43:03.595514 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.595526 | orchestrator |
2025-09-13 00:43:03.595537 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:43:03.595548 | orchestrator | Saturday 13 September 2025 00:42:56 +0000 (0:00:00.161) 0:00:28.642 ****
2025-09-13 00:43:03.595559 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.595570 | orchestrator |
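The "Add known links" tasks above associate each disk with its stable `/dev/disk/by-id/` names (e.g. `scsi-0QEMU_QEMU_HARDDISK_<serial>` for `sdb`), so later steps can address devices by a name that survives reboots. A small sketch of that mapping, with illustrative names (on a live host the dict would be built from the filesystem, e.g. via `os.listdir("/dev/disk/by-id")` and `os.path.realpath`):

```python
# Hypothetical sketch of the "Add known links to the list of available block
# devices" step: given a mapping of by-id symlink names to the kernel devices
# they resolve to, collect the stable names for one device.
def links_for_device(by_id, device):
    """Return the /dev/disk/by-id names that point at the given device."""
    return sorted(name for name, target in by_id.items() if target == device)

# Example values taken from the log above; the targets are assumptions.
example = {
    "scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96": "sdb",
    "scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96": "sdb",
    "ata-QEMU_DVD-ROM_QM00001": "sr0",
}
print(links_for_device(example, "sdb"))
```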
2025-09-13 00:43:03.595581 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:43:03.595592 | orchestrator | Saturday 13 September 2025 00:42:56 +0000 (0:00:00.200) 0:00:28.842 ****
2025-09-13 00:43:03.595603 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.595614 | orchestrator |
2025-09-13 00:43:03.595625 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:43:03.595636 | orchestrator | Saturday 13 September 2025 00:42:57 +0000 (0:00:00.219) 0:00:29.061 ****
2025-09-13 00:43:03.595647 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.595658 | orchestrator |
2025-09-13 00:43:03.595668 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:43:03.595679 | orchestrator | Saturday 13 September 2025 00:42:57 +0000 (0:00:00.158) 0:00:29.219 ****
2025-09-13 00:43:03.595690 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.595701 | orchestrator |
2025-09-13 00:43:03.595712 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:43:03.595722 | orchestrator | Saturday 13 September 2025 00:42:57 +0000 (0:00:00.168) 0:00:29.388 ****
2025-09-13 00:43:03.595733 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.595744 | orchestrator |
2025-09-13 00:43:03.595813 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:43:03.595826 | orchestrator | Saturday 13 September 2025 00:42:57 +0000 (0:00:00.160) 0:00:29.548 ****
2025-09-13 00:43:03.595839 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.595874 | orchestrator |
2025-09-13 00:43:03.595887 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:43:03.595899 | orchestrator | Saturday 13 September 2025 00:42:57 +0000 (0:00:00.130) 0:00:29.679 ****
2025-09-13 00:43:03.595912 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.595925 | orchestrator |
2025-09-13 00:43:03.595955 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:43:03.595969 | orchestrator | Saturday 13 September 2025 00:42:57 +0000 (0:00:00.145) 0:00:29.825 ****
2025-09-13 00:43:03.595982 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96)
2025-09-13 00:43:03.595997 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96)
2025-09-13 00:43:03.596009 | orchestrator |
2025-09-13 00:43:03.596022 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:43:03.596035 | orchestrator | Saturday 13 September 2025 00:42:58 +0000 (0:00:00.453) 0:00:30.279 ****
2025-09-13 00:43:03.596048 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1763dbba-d504-4b6d-865a-93cad2d65fc8)
2025-09-13 00:43:03.596062 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1763dbba-d504-4b6d-865a-93cad2d65fc8)
2025-09-13 00:43:03.596074 | orchestrator |
2025-09-13 00:43:03.596086 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:43:03.596099 | orchestrator | Saturday 13 September 2025 00:42:58 +0000 (0:00:00.611) 0:00:30.891 ****
2025-09-13 00:43:03.596112 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c5da3e8c-99b7-4761-a17c-7637f0eb6556)
2025-09-13 00:43:03.596124 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c5da3e8c-99b7-4761-a17c-7637f0eb6556)
2025-09-13 00:43:03.596136 | orchestrator |
2025-09-13 00:43:03.596149 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:43:03.596161 | orchestrator | Saturday 13 September 2025 00:42:59 +0000 (0:00:00.367) 0:00:31.258 ****
2025-09-13 00:43:03.596174 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9346358d-8291-41dd-be96-0d8c84c54113)
2025-09-13 00:43:03.596187 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9346358d-8291-41dd-be96-0d8c84c54113)
2025-09-13 00:43:03.596198 | orchestrator |
2025-09-13 00:43:03.596209 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:43:03.596220 | orchestrator | Saturday 13 September 2025 00:42:59 +0000 (0:00:00.387) 0:00:31.645 ****
2025-09-13 00:43:03.596231 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-13 00:43:03.596242 | orchestrator |
2025-09-13 00:43:03.596252 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:43:03.596263 | orchestrator | Saturday 13 September 2025 00:42:59 +0000 (0:00:00.311) 0:00:31.957 ****
2025-09-13 00:43:03.596293 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-09-13 00:43:03.596305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-09-13 00:43:03.596316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-09-13 00:43:03.596327 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-09-13 00:43:03.596338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-09-13 00:43:03.596349 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-09-13 00:43:03.596359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-09-13 00:43:03.596370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-09-13 00:43:03.596382 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-09-13 00:43:03.596403 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-09-13 00:43:03.596414 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-09-13 00:43:03.596425 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-09-13 00:43:03.596436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-09-13 00:43:03.596447 | orchestrator |
2025-09-13 00:43:03.596458 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:43:03.596469 | orchestrator | Saturday 13 September 2025 00:43:00 +0000 (0:00:00.357) 0:00:32.315 ****
2025-09-13 00:43:03.596480 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.596490 | orchestrator |
2025-09-13 00:43:03.596501 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:43:03.596512 | orchestrator | Saturday 13 September 2025 00:43:00 +0000 (0:00:00.202) 0:00:32.517 ****
2025-09-13 00:43:03.596523 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.596534 | orchestrator |
2025-09-13 00:43:03.596545 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:43:03.596556 | orchestrator | Saturday 13 September 2025 00:43:00 +0000 (0:00:00.192) 0:00:32.710 ****
2025-09-13 00:43:03.596567 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.596578 | orchestrator |
2025-09-13 00:43:03.596589 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:43:03.596600 | orchestrator | Saturday 13 September 2025 00:43:00 +0000 (0:00:00.215) 0:00:32.925 ****
2025-09-13 00:43:03.596611 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.596621 | orchestrator |
2025-09-13 00:43:03.596632 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:43:03.596643 | orchestrator | Saturday 13 September 2025 00:43:01 +0000 (0:00:00.162) 0:00:33.088 ****
2025-09-13 00:43:03.596654 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.596665 | orchestrator |
2025-09-13 00:43:03.596676 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:43:03.596687 | orchestrator | Saturday 13 September 2025 00:43:01 +0000 (0:00:00.169) 0:00:33.257 ****
2025-09-13 00:43:03.596698 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.596709 | orchestrator |
2025-09-13 00:43:03.596720 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:43:03.596730 | orchestrator | Saturday 13 September 2025 00:43:01 +0000 (0:00:00.502) 0:00:33.759 ****
2025-09-13 00:43:03.596741 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.596776 | orchestrator |
2025-09-13 00:43:03.596787 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:43:03.596798 | orchestrator | Saturday 13 September 2025 00:43:01 +0000 (0:00:00.179) 0:00:33.939 ****
2025-09-13 00:43:03.596809 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.596820 | orchestrator |
2025-09-13 00:43:03.596831 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:43:03.596843 | orchestrator | Saturday 13 September 2025 00:43:02 +0000 (0:00:00.184) 0:00:34.123 ****
2025-09-13 00:43:03.596854 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-09-13 00:43:03.596865 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-09-13 00:43:03.596876 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-09-13 00:43:03.596887 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-09-13 00:43:03.596898 | orchestrator |
2025-09-13 00:43:03.596909 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:43:03.596920 | orchestrator | Saturday 13 September 2025 00:43:02 +0000 (0:00:00.696) 0:00:34.820 ****
2025-09-13 00:43:03.596931 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.596942 | orchestrator |
2025-09-13 00:43:03.596953 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:43:03.596970 | orchestrator | Saturday 13 September 2025 00:43:02 +0000 (0:00:00.186) 0:00:35.007 ****
2025-09-13 00:43:03.596981 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.596992 | orchestrator |
2025-09-13 00:43:03.597003 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:43:03.597014 | orchestrator | Saturday 13 September 2025 00:43:03 +0000 (0:00:00.201) 0:00:35.208 ****
2025-09-13 00:43:03.597025 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.597036 | orchestrator |
2025-09-13 00:43:03.597047 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:43:03.597058 | orchestrator | Saturday 13 September 2025 00:43:03 +0000 (0:00:00.220) 0:00:35.428 ****
2025-09-13 00:43:03.597075 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:03.597086 | orchestrator |
2025-09-13 00:43:03.597097 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-13 00:43:03.597114 | orchestrator | Saturday 13 September 2025 00:43:03 +0000 (0:00:00.187) 0:00:35.615 ****
2025-09-13 00:43:08.153827 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-09-13 00:43:08.153930 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-09-13 00:43:08.153945 | orchestrator |
2025-09-13 00:43:08.153957 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-13 00:43:08.153968 | orchestrator | Saturday 13 September 2025 00:43:03 +0000 (0:00:00.148) 0:00:35.763 ****
2025-09-13 00:43:08.153979 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:08.153991 | orchestrator |
2025-09-13 00:43:08.154002 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-13 00:43:08.154071 | orchestrator | Saturday 13 September 2025 00:43:03 +0000 (0:00:00.168) 0:00:35.932 ****
2025-09-13 00:43:08.154087 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:08.154098 | orchestrator |
2025-09-13 00:43:08.154109 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-13 00:43:08.154120 | orchestrator | Saturday 13 September 2025 00:43:04 +0000 (0:00:00.143) 0:00:36.075 ****
2025-09-13 00:43:08.154130 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:08.154141 | orchestrator |
2025-09-13 00:43:08.154152 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-13 00:43:08.154163 | orchestrator | Saturday 13 September 2025 00:43:04 +0000 (0:00:00.142) 0:00:36.218 ****
2025-09-13 00:43:08.154174 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:43:08.154186 | orchestrator |
2025-09-13 00:43:08.154197 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-13 00:43:08.154208 | orchestrator | Saturday 13 September 2025 00:43:04 +0000 (0:00:00.385) 0:00:36.603 ****
2025-09-13 00:43:08.154220 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4283f495-c022-53d0-a3fe-4c36d70cad8f'}})
2025-09-13 00:43:08.154232 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'}})
2025-09-13 00:43:08.154243 | orchestrator |
2025-09-13 00:43:08.154254 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-13 00:43:08.154264 | orchestrator | Saturday 13 September 2025 00:43:04 +0000 (0:00:00.238) 0:00:36.841 ****
2025-09-13 00:43:08.154276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4283f495-c022-53d0-a3fe-4c36d70cad8f'}})
2025-09-13 00:43:08.154289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'}})
2025-09-13 00:43:08.154300 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:08.154313 | orchestrator |
2025-09-13 00:43:08.154343 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-13 00:43:08.154357 | orchestrator | Saturday 13 September 2025 00:43:04 +0000 (0:00:00.155) 0:00:36.997 ****
2025-09-13 00:43:08.154370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4283f495-c022-53d0-a3fe-4c36d70cad8f'}})
2025-09-13 00:43:08.154403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'}})
2025-09-13 00:43:08.154416 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:08.154428 | orchestrator |
2025-09-13 00:43:08.154440 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-13 00:43:08.154452 | orchestrator | Saturday 13 September 2025 00:43:05 +0000 (0:00:00.176) 0:00:37.174 ****
2025-09-13 00:43:08.154464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4283f495-c022-53d0-a3fe-4c36d70cad8f'}})
2025-09-13 00:43:08.154476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'}})
2025-09-13 00:43:08.154489 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:08.154501 | orchestrator |
2025-09-13 00:43:08.154513 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-13 00:43:08.154526 | orchestrator | Saturday 13 September 2025 00:43:05 +0000 (0:00:00.186) 0:00:37.361 ****
2025-09-13 00:43:08.154538 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:43:08.154550 | orchestrator |
2025-09-13 00:43:08.154562 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-13 00:43:08.154575 | orchestrator | Saturday 13 September 2025 00:43:05 +0000 (0:00:00.196) 0:00:37.558 ****
2025-09-13 00:43:08.154587 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:43:08.154601 | orchestrator |
2025-09-13 00:43:08.154613 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-13 00:43:08.154626 | orchestrator | Saturday 13 September 2025 00:43:05 +0000 (0:00:00.145) 0:00:37.703 ****
2025-09-13 00:43:08.154638 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:08.154651 | orchestrator |
2025-09-13 00:43:08.154664 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-13 00:43:08.154675 | orchestrator | Saturday 13 September 2025 00:43:05 +0000 (0:00:00.125) 0:00:37.828 ****
2025-09-13 00:43:08.154686 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:08.154696 | orchestrator |
2025-09-13 00:43:08.154707 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-13 00:43:08.154718 | orchestrator | Saturday 13 September 2025 00:43:05 +0000 (0:00:00.175) 0:00:38.004 ****
2025-09-13 00:43:08.154729 | orchestrator | skipping: [testbed-node-5]
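The "Generate lvm_volumes structure (block only)" and "Compile lvm_volumes" tasks reduce, for this block-only layout, to a simple transformation: each OSD's `osd_lvm_uuid` yields an `osd-block-<uuid>` LV name and a `ceph-<uuid>` VG name, exactly as the printed configuration data shows. A sketch of that mapping (the function name is mine; the naming scheme is taken from the log):

```python
# Sketch of the block-only lvm_volumes generation: one {data, data_vg} entry
# per OSD device, both names derived from the device's osd_lvm_uuid.
def lvm_volumes(ceph_osd_devices):
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# The testbed-node-5 devices from the log:
devices = {
    "sdb": {"osd_lvm_uuid": "4283f495-c022-53d0-a3fe-4c36d70cad8f"},
    "sdc": {"osd_lvm_uuid": "7339ba9f-b6a9-52d7-bde1-e21ae438ff7a"},
}
print(lvm_volumes(devices))
```

With a DB or WAL device present, the skipped "block + db" / "block + wal" variants would add `db`/`db_vg` or `wal`/`wal_vg` keys to each entry instead.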
2025-09-13 00:43:08.154739 | orchestrator |
2025-09-13 00:43:08.154750 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-13 00:43:08.154808 | orchestrator | Saturday 13 September 2025 00:43:06 +0000 (0:00:00.183) 0:00:38.188 ****
2025-09-13 00:43:08.154820 | orchestrator | ok: [testbed-node-5] => {
2025-09-13 00:43:08.154831 | orchestrator |     "ceph_osd_devices": {
2025-09-13 00:43:08.154842 | orchestrator |         "sdb": {
2025-09-13 00:43:08.154854 | orchestrator |             "osd_lvm_uuid": "4283f495-c022-53d0-a3fe-4c36d70cad8f"
2025-09-13 00:43:08.154885 | orchestrator |         },
2025-09-13 00:43:08.154897 | orchestrator |         "sdc": {
2025-09-13 00:43:08.154907 | orchestrator |             "osd_lvm_uuid": "7339ba9f-b6a9-52d7-bde1-e21ae438ff7a"
2025-09-13 00:43:08.154918 | orchestrator |         }
2025-09-13 00:43:08.154929 | orchestrator |     }
2025-09-13 00:43:08.154940 | orchestrator | }
2025-09-13 00:43:08.154951 | orchestrator |
2025-09-13 00:43:08.154962 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-13 00:43:08.154973 | orchestrator | Saturday 13 September 2025 00:43:06 +0000 (0:00:00.168) 0:00:38.356 ****
2025-09-13 00:43:08.154984 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:08.154994 | orchestrator |
2025-09-13 00:43:08.155005 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-13 00:43:08.155016 | orchestrator | Saturday 13 September 2025 00:43:06 +0000 (0:00:00.170) 0:00:38.527 ****
2025-09-13 00:43:08.155027 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:08.155038 | orchestrator |
2025-09-13 00:43:08.155048 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-13 00:43:08.155111 | orchestrator | Saturday 13 September 2025 00:43:06 +0000 (0:00:00.408) 0:00:38.935 ****
2025-09-13 00:43:08.155122 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:43:08.155133 | orchestrator |
2025-09-13 00:43:08.155144 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-13 00:43:08.155155 | orchestrator | Saturday 13 September 2025 00:43:07 +0000 (0:00:00.130) 0:00:39.065 ****
2025-09-13 00:43:08.155166 | orchestrator | changed: [testbed-node-5] => {
2025-09-13 00:43:08.155176 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-13 00:43:08.155187 | orchestrator |         "ceph_osd_devices": {
2025-09-13 00:43:08.155198 | orchestrator |             "sdb": {
2025-09-13 00:43:08.155209 | orchestrator |                 "osd_lvm_uuid": "4283f495-c022-53d0-a3fe-4c36d70cad8f"
2025-09-13 00:43:08.155219 | orchestrator |             },
2025-09-13 00:43:08.155230 | orchestrator |             "sdc": {
2025-09-13 00:43:08.155241 | orchestrator |                 "osd_lvm_uuid": "7339ba9f-b6a9-52d7-bde1-e21ae438ff7a"
2025-09-13 00:43:08.155252 | orchestrator |             }
2025-09-13 00:43:08.155262 | orchestrator |         },
2025-09-13 00:43:08.155273 | orchestrator |         "lvm_volumes": [
2025-09-13 00:43:08.155284 | orchestrator |             {
2025-09-13 00:43:08.155295 | orchestrator |                 "data": "osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f",
2025-09-13 00:43:08.155305 | orchestrator |                 "data_vg": "ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f"
2025-09-13 00:43:08.155316 | orchestrator |             },
2025-09-13 00:43:08.155326 | orchestrator |             {
2025-09-13 00:43:08.155337 | orchestrator |                 "data": "osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a",
2025-09-13 00:43:08.155349 | orchestrator |                 "data_vg": "ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a"
2025-09-13 00:43:08.155360 | orchestrator |             }
2025-09-13 00:43:08.155370 | orchestrator |         ]
2025-09-13 00:43:08.155381 | orchestrator |     }
2025-09-13 00:43:08.155396 | orchestrator | }
2025-09-13 00:43:08.155408 | orchestrator |
2025-09-13 00:43:08.155419 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-13 00:43:08.155430 | orchestrator | Saturday 13 September 2025 00:43:07 +0000 (0:00:00.199) 0:00:39.265 ****
2025-09-13 00:43:08.155440 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-13 00:43:08.155451 | orchestrator |
2025-09-13 00:43:08.155462 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:43:08.155481 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-13 00:43:08.155494 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-13 00:43:08.155505 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-13 00:43:08.155516 | orchestrator |
2025-09-13 00:43:08.155526 | orchestrator |
2025-09-13 00:43:08.155537 | orchestrator |
2025-09-13 00:43:08.155548 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:43:08.155558 | orchestrator | Saturday 13 September 2025 00:43:08 +0000 (0:00:00.894) 0:00:40.159 ****
2025-09-13 00:43:08.155569 | orchestrator | ===============================================================================
2025-09-13 00:43:08.155580 | orchestrator | Write configuration file ------------------------------------------------ 4.19s
2025-09-13 00:43:08.155590 | orchestrator | Add known partitions to the list of available block devices ------------- 1.15s
2025-09-13 00:43:08.155601 | orchestrator | Add known links to the list of available block devices ------------------ 1.04s
2025-09-13 00:43:08.155612 | orchestrator | Add known partitions to the list of available block devices ------------- 0.97s
2025-09-13 00:43:08.155622 | orchestrator | Get initial list of available block devices ----------------------------- 0.89s
2025-09-13 00:43:08.155640 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.85s
2025-09-13 00:43:08.155651 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s
2025-09-13 00:43:08.155662 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s
2025-09-13 00:43:08.155672 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2025-09-13 00:43:08.155683 | orchestrator | Print DB devices -------------------------------------------------------- 0.65s
2025-09-13 00:43:08.155694 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2025-09-13 00:43:08.155704 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.64s
2025-09-13 00:43:08.155715 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.63s
2025-09-13 00:43:08.155726 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2025-09-13 00:43:08.155743 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2025-09-13 00:43:08.516982 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2025-09-13 00:43:08.517090 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s
2025-09-13 00:43:08.517105 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.60s
2025-09-13 00:43:08.517117 | orchestrator | Print configuration data ------------------------------------------------ 0.57s
2025-09-13 00:43:08.517129 | orchestrator | Set WAL devices config data --------------------------------------------- 0.57s
2025-09-13 00:43:30.969393 | orchestrator | 2025-09-13 00:43:30 | INFO  | Task e64d6a72-02a8-48b4-9d4e-c92c635db7e7 (sync inventory) is running in background. Output coming soon.
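Editor's note: the "Print configuration data" dump above shows a fixed mapping from each `ceph_osd_devices` entry to an `lvm_volumes` entry. A minimal sketch of that transformation, assuming the naming is purely prefix plus UUID (the prefixes `osd-block-` and `ceph-` are taken from the names visible in the log; the function name is illustrative):

```python
def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    # Mirrors the naming visible in the log: LV "osd-block-<uuid>"
    # inside VG "ceph-<uuid>", one pair per OSD data device.
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]
```

With the node-5 values from the dump, this yields exactly the two `lvm_volumes` entries printed above.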
2025-09-13 00:43:54.514551 | orchestrator | 2025-09-13 00:43:32 | INFO  | Starting group_vars file reorganization
2025-09-13 00:43:54.514676 | orchestrator | 2025-09-13 00:43:32 | INFO  | Moved 0 file(s) to their respective directories
2025-09-13 00:43:54.514694 | orchestrator | 2025-09-13 00:43:32 | INFO  | Group_vars file reorganization completed
2025-09-13 00:43:54.514706 | orchestrator | 2025-09-13 00:43:34 | INFO  | Starting variable preparation from inventory
2025-09-13 00:43:54.514718 | orchestrator | 2025-09-13 00:43:37 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-09-13 00:43:54.514729 | orchestrator | 2025-09-13 00:43:37 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-09-13 00:43:54.514740 | orchestrator | 2025-09-13 00:43:37 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-09-13 00:43:54.514751 | orchestrator | 2025-09-13 00:43:37 | INFO  | 3 file(s) written, 6 host(s) processed
2025-09-13 00:43:54.514762 | orchestrator | 2025-09-13 00:43:37 | INFO  | Variable preparation completed
2025-09-13 00:43:54.514773 | orchestrator | 2025-09-13 00:43:38 | INFO  | Starting inventory overwrite handling
2025-09-13 00:43:54.514785 | orchestrator | 2025-09-13 00:43:38 | INFO  | Handling group overwrites in 99-overwrite
2025-09-13 00:43:54.514797 | orchestrator | 2025-09-13 00:43:38 | INFO  | Removing group frr:children from 60-generic
2025-09-13 00:43:54.514851 | orchestrator | 2025-09-13 00:43:38 | INFO  | Removing group storage:children from 50-kolla
2025-09-13 00:43:54.514863 | orchestrator | 2025-09-13 00:43:38 | INFO  | Removing group netbird:children from 50-infrastruture
2025-09-13 00:43:54.514874 | orchestrator | 2025-09-13 00:43:38 | INFO  | Removing group ceph-rgw from 50-ceph
2025-09-13 00:43:54.514885 | orchestrator | 2025-09-13 00:43:38 | INFO  | Removing group ceph-mds from 50-ceph
2025-09-13 00:43:54.514896 | orchestrator | 2025-09-13 00:43:38 | INFO  | Handling group overwrites in 20-roles
2025-09-13 00:43:54.514907 | orchestrator | 2025-09-13 00:43:38 | INFO  | Removing group k3s_node from 50-infrastruture
2025-09-13 00:43:54.514943 | orchestrator | 2025-09-13 00:43:38 | INFO  | Removed 6 group(s) in total
2025-09-13 00:43:54.514955 | orchestrator | 2025-09-13 00:43:38 | INFO  | Inventory overwrite handling completed
2025-09-13 00:43:54.514966 | orchestrator | 2025-09-13 00:43:39 | INFO  | Starting merge of inventory files
2025-09-13 00:43:54.514977 | orchestrator | 2025-09-13 00:43:39 | INFO  | Inventory files merged successfully
2025-09-13 00:43:54.514988 | orchestrator | 2025-09-13 00:43:43 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-09-13 00:43:54.514999 | orchestrator | 2025-09-13 00:43:53 | INFO  | Successfully wrote ClusterShell configuration
2025-09-13 00:43:54.515010 | orchestrator | [master 8142c68] 2025-09-13-00-43
2025-09-13 00:43:54.515022 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-09-13 00:43:56.835394 | orchestrator | 2025-09-13 00:43:56 | INFO  | Task 95cae072-8044-4a74-b0fe-d75c58484316 (ceph-create-lvm-devices) was prepared for execution.
2025-09-13 00:43:56.835495 | orchestrator | 2025-09-13 00:43:56 | INFO  | It takes a moment until task 95cae072-8044-4a74-b0fe-d75c58484316 (ceph-create-lvm-devices) has been started and output is visible here.
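Editor's note: the `ceph-create-lvm-devices` play that follows creates one volume group per OSD data device ("Create block VGs") and one logical volume inside it ("Create block LVs"). A hedged sketch of the equivalent LVM CLI calls implied by the names in the log; the play itself presumably uses Ansible's LVM modules rather than shelling out, and the `100%FREE` sizing is an assumption, not read from the playbook:

```python
def lvm_commands(device: str, osd_lvm_uuid: str) -> list:
    # Illustrative vgcreate/lvcreate equivalents of the "Create block
    # VGs" / "Create block LVs" tasks; the LV name and VG name follow
    # the "osd-block-<uuid>" / "ceph-<uuid>" pattern seen in the log.
    vg = f"ceph-{osd_lvm_uuid}"
    lv = f"osd-block-{osd_lvm_uuid}"
    return [
        f"vgcreate {vg} /dev/{device}",
        f"lvcreate -l 100%FREE -n {lv} {vg}",
    ]
```

For node-3's `sdb` this produces a `vgcreate` on `ceph-741132e6-…` followed by an `lvcreate` of `osd-block-741132e6-…`, matching the item pairs reported as `changed` below.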
2025-09-13 00:44:07.577801 | orchestrator |
2025-09-13 00:44:07.577945 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-13 00:44:07.577963 | orchestrator |
2025-09-13 00:44:07.577975 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-13 00:44:07.577987 | orchestrator | Saturday 13 September 2025 00:44:00 +0000 (0:00:00.239) 0:00:00.239 ****
2025-09-13 00:44:07.577999 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-13 00:44:07.578010 | orchestrator |
2025-09-13 00:44:07.578070 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-13 00:44:07.578082 | orchestrator | Saturday 13 September 2025 00:44:01 +0000 (0:00:00.206) 0:00:00.446 ****
2025-09-13 00:44:07.578094 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:44:07.578106 | orchestrator |
2025-09-13 00:44:07.578117 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:44:07.578127 | orchestrator | Saturday 13 September 2025 00:44:01 +0000 (0:00:00.197) 0:00:00.644 ****
2025-09-13 00:44:07.578139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-13 00:44:07.578151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-13 00:44:07.578162 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-13 00:44:07.578173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-13 00:44:07.578183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-13 00:44:07.578194 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-13 00:44:07.578205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-13 00:44:07.578216 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-13 00:44:07.578227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-13 00:44:07.578237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-13 00:44:07.578248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-13 00:44:07.578259 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-13 00:44:07.578270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-13 00:44:07.578280 | orchestrator |
2025-09-13 00:44:07.578291 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:44:07.578326 | orchestrator | Saturday 13 September 2025 00:44:01 +0000 (0:00:00.341) 0:00:00.986 ****
2025-09-13 00:44:07.578340 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:07.578354 | orchestrator |
2025-09-13 00:44:07.578366 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:44:07.578395 | orchestrator | Saturday 13 September 2025 00:44:01 +0000 (0:00:00.334) 0:00:01.321 ****
2025-09-13 00:44:07.578408 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:07.578421 | orchestrator |
2025-09-13 00:44:07.578433 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:44:07.578445 | orchestrator | Saturday 13 September 2025 00:44:02 +0000 (0:00:00.169) 0:00:01.490 ****
2025-09-13 00:44:07.578466 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:07.578480 | orchestrator |
2025-09-13 00:44:07.578493 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:44:07.578505 | orchestrator | Saturday 13 September 2025 00:44:02 +0000 (0:00:00.156) 0:00:01.646 ****
2025-09-13 00:44:07.578518 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:07.578530 | orchestrator |
2025-09-13 00:44:07.578542 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:44:07.578555 | orchestrator | Saturday 13 September 2025 00:44:02 +0000 (0:00:00.172) 0:00:01.819 ****
2025-09-13 00:44:07.578567 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:07.578579 | orchestrator |
2025-09-13 00:44:07.578592 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:44:07.578603 | orchestrator | Saturday 13 September 2025 00:44:02 +0000 (0:00:00.180) 0:00:01.999 ****
2025-09-13 00:44:07.578616 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:07.578628 | orchestrator |
2025-09-13 00:44:07.578641 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:44:07.578653 | orchestrator | Saturday 13 September 2025 00:44:02 +0000 (0:00:00.190) 0:00:02.189 ****
2025-09-13 00:44:07.578665 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:07.578677 | orchestrator |
2025-09-13 00:44:07.578690 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:44:07.578702 | orchestrator | Saturday 13 September 2025 00:44:03 +0000 (0:00:00.197) 0:00:02.386 ****
2025-09-13 00:44:07.578712 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:07.578723 | orchestrator |
2025-09-13 00:44:07.578734 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:44:07.578744 | orchestrator | Saturday 13 September 2025 00:44:03 +0000 (0:00:00.249) 0:00:02.636 ****
2025-09-13 00:44:07.578755 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d)
2025-09-13 00:44:07.578767 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d)
2025-09-13 00:44:07.578778 | orchestrator |
2025-09-13 00:44:07.578788 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:44:07.578799 | orchestrator | Saturday 13 September 2025 00:44:03 +0000 (0:00:00.435) 0:00:03.071 ****
2025-09-13 00:44:07.578846 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6e724704-b413-40a8-af93-f723a1c0b62f)
2025-09-13 00:44:07.578859 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6e724704-b413-40a8-af93-f723a1c0b62f)
2025-09-13 00:44:07.578870 | orchestrator |
2025-09-13 00:44:07.578880 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:44:07.578891 | orchestrator | Saturday 13 September 2025 00:44:04 +0000 (0:00:00.377) 0:00:03.448 ****
2025-09-13 00:44:07.578902 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e25c372e-2cb9-47f6-a0c5-1defd25ac71c)
2025-09-13 00:44:07.578913 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e25c372e-2cb9-47f6-a0c5-1defd25ac71c)
2025-09-13 00:44:07.578924 | orchestrator |
2025-09-13 00:44:07.578934 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:44:07.578954 | orchestrator | Saturday 13 September 2025 00:44:04 +0000 (0:00:00.561) 0:00:04.010 ****
2025-09-13 00:44:07.578965 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0c46d17e-adbc-49dd-8bd7-8befc745e964)
2025-09-13 00:44:07.578976 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0c46d17e-adbc-49dd-8bd7-8befc745e964)
2025-09-13 00:44:07.578987 | orchestrator |
2025-09-13 00:44:07.578997 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:44:07.579008 | orchestrator | Saturday 13 September 2025 00:44:05 +0000 (0:00:00.822) 0:00:04.832 ****
2025-09-13 00:44:07.579019 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-13 00:44:07.579030 | orchestrator |
2025-09-13 00:44:07.579040 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:44:07.579051 | orchestrator | Saturday 13 September 2025 00:44:05 +0000 (0:00:00.350) 0:00:05.183 ****
2025-09-13 00:44:07.579061 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-09-13 00:44:07.579072 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-09-13 00:44:07.579083 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-09-13 00:44:07.579093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-09-13 00:44:07.579104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-09-13 00:44:07.579115 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-09-13 00:44:07.579125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-09-13 00:44:07.579136 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-09-13 00:44:07.579146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-09-13 00:44:07.579157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-09-13 00:44:07.579168 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-09-13 00:44:07.579178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-09-13 00:44:07.579189 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-09-13 00:44:07.579200 | orchestrator |
2025-09-13 00:44:07.579210 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:44:07.579221 | orchestrator | Saturday 13 September 2025 00:44:06 +0000 (0:00:00.392) 0:00:05.575 ****
2025-09-13 00:44:07.579232 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:07.579243 | orchestrator |
2025-09-13 00:44:07.579254 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:44:07.579264 | orchestrator | Saturday 13 September 2025 00:44:06 +0000 (0:00:00.195) 0:00:05.771 ****
2025-09-13 00:44:07.579275 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:07.579286 | orchestrator |
2025-09-13 00:44:07.579296 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:44:07.579307 | orchestrator | Saturday 13 September 2025 00:44:06 +0000 (0:00:00.191) 0:00:05.962 ****
2025-09-13 00:44:07.579318 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:07.579329 | orchestrator |
2025-09-13 00:44:07.579339 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:44:07.579350 | orchestrator | Saturday 13 September 2025 00:44:06 +0000 (0:00:00.170) 0:00:06.132 ****
2025-09-13 00:44:07.579361 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:07.579371 | orchestrator |
2025-09-13 00:44:07.579382 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:44:07.579400 | orchestrator | Saturday 13 September 2025 00:44:06 +0000 (0:00:00.186) 0:00:06.319 ****
2025-09-13 00:44:07.579410 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:07.579421 | orchestrator |
2025-09-13 00:44:07.579432 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:44:07.579442 | orchestrator | Saturday 13 September 2025 00:44:07 +0000 (0:00:00.169) 0:00:06.488 ****
2025-09-13 00:44:07.579453 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:07.579464 | orchestrator |
2025-09-13 00:44:07.579474 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:44:07.579485 | orchestrator | Saturday 13 September 2025 00:44:07 +0000 (0:00:00.152) 0:00:06.641 ****
2025-09-13 00:44:07.579496 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:07.579506 | orchestrator |
2025-09-13 00:44:07.579517 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:44:07.579528 | orchestrator | Saturday 13 September 2025 00:44:07 +0000 (0:00:00.142) 0:00:06.783 ****
2025-09-13 00:44:07.579545 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.009699 | orchestrator |
2025-09-13 00:44:15.009806 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:44:15.009822 | orchestrator | Saturday 13 September 2025 00:44:07 +0000 (0:00:00.156) 0:00:06.940 ****
2025-09-13 00:44:15.009882 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-13 00:44:15.009895 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-13 00:44:15.009907 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-13 00:44:15.009919 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-13 00:44:15.009930 | orchestrator |
2025-09-13 00:44:15.009942 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:44:15.009953 | orchestrator | Saturday 13 September 2025 00:44:08 +0000 (0:00:00.846) 0:00:07.787 ****
2025-09-13 00:44:15.009964 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.009975 | orchestrator |
2025-09-13 00:44:15.009986 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:44:15.009997 | orchestrator | Saturday 13 September 2025 00:44:08 +0000 (0:00:00.202) 0:00:07.989 ****
2025-09-13 00:44:15.010008 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.010072 | orchestrator |
2025-09-13 00:44:15.010084 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:44:15.010096 | orchestrator | Saturday 13 September 2025 00:44:08 +0000 (0:00:00.262) 0:00:08.252 ****
2025-09-13 00:44:15.010107 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.010117 | orchestrator |
2025-09-13 00:44:15.010129 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-13 00:44:15.010140 | orchestrator | Saturday 13 September 2025 00:44:09 +0000 (0:00:00.195) 0:00:08.447 ****
2025-09-13 00:44:15.010151 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.010162 | orchestrator |
2025-09-13 00:44:15.010173 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-13 00:44:15.010184 | orchestrator | Saturday 13 September 2025 00:44:09 +0000 (0:00:00.287) 0:00:08.735 ****
2025-09-13 00:44:15.010195 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.010206 | orchestrator |
2025-09-13 00:44:15.010217 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-13 00:44:15.010228 | orchestrator | Saturday 13 September 2025 00:44:09 +0000 (0:00:00.143) 0:00:08.878 ****
2025-09-13 00:44:15.010240 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '741132e6-4e77-5ad5-aab1-a12c98657a1e'}})
2025-09-13 00:44:15.010251 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c9c3f5f4-a401-5886-82fa-33c7ca08590f'}})
2025-09-13 00:44:15.010262 | orchestrator |
2025-09-13 00:44:15.010273 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-13 00:44:15.010284 | orchestrator | Saturday 13 September 2025 00:44:09 +0000 (0:00:00.179) 0:00:09.057 ****
2025-09-13 00:44:15.010296 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})
2025-09-13 00:44:15.010335 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})
2025-09-13 00:44:15.010346 | orchestrator |
2025-09-13 00:44:15.010372 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-13 00:44:15.010392 | orchestrator | Saturday 13 September 2025 00:44:11 +0000 (0:00:01.882) 0:00:10.940 ****
2025-09-13 00:44:15.010403 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})
2025-09-13 00:44:15.010415 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})
2025-09-13 00:44:15.010426 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.010437 | orchestrator |
2025-09-13 00:44:15.010448 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-13 00:44:15.010459 | orchestrator | Saturday 13 September 2025 00:44:11 +0000 (0:00:00.176) 0:00:11.117 ****
2025-09-13 00:44:15.010470 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})
2025-09-13 00:44:15.010481 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})
2025-09-13 00:44:15.010492 | orchestrator |
2025-09-13 00:44:15.010503 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-13 00:44:15.010514 | orchestrator | Saturday 13 September 2025 00:44:13 +0000 (0:00:01.465) 0:00:12.583 ****
2025-09-13 00:44:15.010525 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})
2025-09-13 00:44:15.010537 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})
2025-09-13 00:44:15.010548 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.010559 | orchestrator |
2025-09-13 00:44:15.010570 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-13 00:44:15.010581 | orchestrator | Saturday 13 September 2025 00:44:13 +0000 (0:00:00.149) 0:00:12.732 ****
2025-09-13 00:44:15.010592 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.010603 | orchestrator |
2025-09-13 00:44:15.010614 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-13 00:44:15.010643 | orchestrator | Saturday 13 September 2025 00:44:13 +0000 (0:00:00.123) 0:00:12.856 ****
2025-09-13 00:44:15.010655 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})
2025-09-13 00:44:15.010666 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})
2025-09-13 00:44:15.010677 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.010688 | orchestrator |
2025-09-13 00:44:15.010700 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-13 00:44:15.010710 | orchestrator | Saturday 13 September 2025 00:44:13 +0000 (0:00:00.240) 0:00:13.096 ****
2025-09-13 00:44:15.010721 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.010732 | orchestrator |
2025-09-13 00:44:15.010743 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-13 00:44:15.010754 | orchestrator | Saturday 13 September 2025 00:44:13 +0000 (0:00:00.130) 0:00:13.227 ****
2025-09-13 00:44:15.010765 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})
2025-09-13 00:44:15.010783 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})
2025-09-13 00:44:15.010795 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.010806 | orchestrator |
2025-09-13 00:44:15.010817 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-13 00:44:15.010827 | orchestrator | Saturday 13 September 2025 00:44:13 +0000 (0:00:00.133) 0:00:13.361 ****
2025-09-13 00:44:15.010871 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.010883 | orchestrator |
2025-09-13 00:44:15.010894 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-13 00:44:15.010905 | orchestrator | Saturday 13 September 2025 00:44:14 +0000 (0:00:00.125) 0:00:13.486 ****
2025-09-13 00:44:15.010916 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})
2025-09-13 00:44:15.010927 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})
2025-09-13 00:44:15.010938 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.010949 | orchestrator |
2025-09-13 00:44:15.010960 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-13 00:44:15.010971 | orchestrator | Saturday 13 September 2025 00:44:14 +0000 (0:00:00.120) 0:00:13.606 ****
2025-09-13 00:44:15.010983 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:44:15.010994 | orchestrator |
2025-09-13 00:44:15.011005 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-13 00:44:15.011016 | orchestrator | Saturday 13 September 2025 00:44:14 +0000 (0:00:00.125) 0:00:13.732 ****
2025-09-13 00:44:15.011032 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})
2025-09-13 00:44:15.011044 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})
2025-09-13 00:44:15.011055 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.011066 | orchestrator |
2025-09-13 00:44:15.011077 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-13 00:44:15.011088 | orchestrator | Saturday 13 September 2025 00:44:14 +0000 (0:00:00.132) 0:00:13.865 ****
2025-09-13 00:44:15.011099 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})
2025-09-13 00:44:15.011110 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})
2025-09-13 00:44:15.011121 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.011132 | orchestrator |
2025-09-13 00:44:15.011143 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-13 00:44:15.011154 | orchestrator | Saturday 13 September 2025 00:44:14 +0000 (0:00:00.125) 0:00:13.991 ****
2025-09-13 00:44:15.011165 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})
2025-09-13 00:44:15.011176 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})
2025-09-13 00:44:15.011187 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.011198 | orchestrator |
2025-09-13 00:44:15.011209 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-13 00:44:15.011221 | orchestrator | Saturday 13 September 2025 00:44:14 +0000 (0:00:00.133) 0:00:14.125 ****
2025-09-13 00:44:15.011232 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.011251 | orchestrator |
2025-09-13 00:44:15.011262 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-13 00:44:15.011273 | orchestrator | Saturday 13 September 2025 00:44:14 +0000 (0:00:00.122) 0:00:14.247 ****
2025-09-13 00:44:15.011284 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:44:15.011295 | orchestrator |
2025-09-13 00:44:15.011312 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-13 00:44:20.928127 | orchestrator | Saturday 13 September 2025 00:44:15 +0000
(0:00:00.124) 0:00:14.372 **** 2025-09-13 00:44:20.928240 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.928257 | orchestrator | 2025-09-13 00:44:20.928269 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-13 00:44:20.928281 | orchestrator | Saturday 13 September 2025 00:44:15 +0000 (0:00:00.125) 0:00:14.498 **** 2025-09-13 00:44:20.928291 | orchestrator | ok: [testbed-node-3] => { 2025-09-13 00:44:20.928303 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-13 00:44:20.928314 | orchestrator | } 2025-09-13 00:44:20.928325 | orchestrator | 2025-09-13 00:44:20.928336 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-13 00:44:20.928346 | orchestrator | Saturday 13 September 2025 00:44:15 +0000 (0:00:00.259) 0:00:14.757 **** 2025-09-13 00:44:20.928358 | orchestrator | ok: [testbed-node-3] => { 2025-09-13 00:44:20.928368 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-13 00:44:20.928379 | orchestrator | } 2025-09-13 00:44:20.928390 | orchestrator | 2025-09-13 00:44:20.928401 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-13 00:44:20.928411 | orchestrator | Saturday 13 September 2025 00:44:15 +0000 (0:00:00.136) 0:00:14.894 **** 2025-09-13 00:44:20.928422 | orchestrator | ok: [testbed-node-3] => { 2025-09-13 00:44:20.928433 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-13 00:44:20.928444 | orchestrator | } 2025-09-13 00:44:20.928456 | orchestrator | 2025-09-13 00:44:20.928467 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-13 00:44:20.928478 | orchestrator | Saturday 13 September 2025 00:44:15 +0000 (0:00:00.107) 0:00:15.001 **** 2025-09-13 00:44:20.928489 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:44:20.928499 | orchestrator | 2025-09-13 00:44:20.928510 | orchestrator | TASK [Gather 
WAL VGs with total and available size in bytes] ******************* 2025-09-13 00:44:20.928521 | orchestrator | Saturday 13 September 2025 00:44:16 +0000 (0:00:00.603) 0:00:15.605 **** 2025-09-13 00:44:20.928532 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:44:20.928543 | orchestrator | 2025-09-13 00:44:20.928554 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-13 00:44:20.928564 | orchestrator | Saturday 13 September 2025 00:44:16 +0000 (0:00:00.504) 0:00:16.109 **** 2025-09-13 00:44:20.928575 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:44:20.928586 | orchestrator | 2025-09-13 00:44:20.928597 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-13 00:44:20.928608 | orchestrator | Saturday 13 September 2025 00:44:17 +0000 (0:00:00.474) 0:00:16.584 **** 2025-09-13 00:44:20.928618 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:44:20.928629 | orchestrator | 2025-09-13 00:44:20.928640 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-13 00:44:20.928651 | orchestrator | Saturday 13 September 2025 00:44:17 +0000 (0:00:00.139) 0:00:16.723 **** 2025-09-13 00:44:20.928663 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.928675 | orchestrator | 2025-09-13 00:44:20.928687 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-13 00:44:20.928699 | orchestrator | Saturday 13 September 2025 00:44:17 +0000 (0:00:00.118) 0:00:16.842 **** 2025-09-13 00:44:20.928711 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.928723 | orchestrator | 2025-09-13 00:44:20.928735 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-13 00:44:20.928747 | orchestrator | Saturday 13 September 2025 00:44:17 +0000 (0:00:00.096) 0:00:16.938 **** 2025-09-13 00:44:20.928759 | orchestrator | ok: 
[testbed-node-3] => { 2025-09-13 00:44:20.928798 | orchestrator |  "vgs_report": { 2025-09-13 00:44:20.928811 | orchestrator |  "vg": [] 2025-09-13 00:44:20.928824 | orchestrator |  } 2025-09-13 00:44:20.928869 | orchestrator | } 2025-09-13 00:44:20.928883 | orchestrator | 2025-09-13 00:44:20.928896 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-13 00:44:20.928908 | orchestrator | Saturday 13 September 2025 00:44:17 +0000 (0:00:00.132) 0:00:17.070 **** 2025-09-13 00:44:20.928920 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.928932 | orchestrator | 2025-09-13 00:44:20.928945 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-13 00:44:20.928956 | orchestrator | Saturday 13 September 2025 00:44:17 +0000 (0:00:00.135) 0:00:17.206 **** 2025-09-13 00:44:20.928969 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.928980 | orchestrator | 2025-09-13 00:44:20.928993 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-13 00:44:20.929006 | orchestrator | Saturday 13 September 2025 00:44:17 +0000 (0:00:00.137) 0:00:17.343 **** 2025-09-13 00:44:20.929017 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.929028 | orchestrator | 2025-09-13 00:44:20.929039 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-13 00:44:20.929050 | orchestrator | Saturday 13 September 2025 00:44:18 +0000 (0:00:00.249) 0:00:17.593 **** 2025-09-13 00:44:20.929060 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.929071 | orchestrator | 2025-09-13 00:44:20.929082 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-13 00:44:20.929093 | orchestrator | Saturday 13 September 2025 00:44:18 +0000 (0:00:00.130) 0:00:17.724 **** 2025-09-13 00:44:20.929103 | orchestrator | skipping: 
[testbed-node-3] 2025-09-13 00:44:20.929114 | orchestrator | 2025-09-13 00:44:20.929143 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-13 00:44:20.929154 | orchestrator | Saturday 13 September 2025 00:44:18 +0000 (0:00:00.123) 0:00:17.848 **** 2025-09-13 00:44:20.929165 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.929176 | orchestrator | 2025-09-13 00:44:20.929187 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-13 00:44:20.929198 | orchestrator | Saturday 13 September 2025 00:44:18 +0000 (0:00:00.117) 0:00:17.965 **** 2025-09-13 00:44:20.929209 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.929219 | orchestrator | 2025-09-13 00:44:20.929230 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-13 00:44:20.929241 | orchestrator | Saturday 13 September 2025 00:44:18 +0000 (0:00:00.127) 0:00:18.092 **** 2025-09-13 00:44:20.929252 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.929263 | orchestrator | 2025-09-13 00:44:20.929274 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-13 00:44:20.929304 | orchestrator | Saturday 13 September 2025 00:44:18 +0000 (0:00:00.120) 0:00:18.213 **** 2025-09-13 00:44:20.929315 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.929326 | orchestrator | 2025-09-13 00:44:20.929337 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-13 00:44:20.929348 | orchestrator | Saturday 13 September 2025 00:44:18 +0000 (0:00:00.127) 0:00:18.341 **** 2025-09-13 00:44:20.929359 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.929369 | orchestrator | 2025-09-13 00:44:20.929380 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-13 00:44:20.929391 | 
orchestrator | Saturday 13 September 2025 00:44:19 +0000 (0:00:00.118) 0:00:18.459 **** 2025-09-13 00:44:20.929402 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.929412 | orchestrator | 2025-09-13 00:44:20.929423 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-13 00:44:20.929433 | orchestrator | Saturday 13 September 2025 00:44:19 +0000 (0:00:00.144) 0:00:18.604 **** 2025-09-13 00:44:20.929444 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.929455 | orchestrator | 2025-09-13 00:44:20.929476 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-13 00:44:20.929487 | orchestrator | Saturday 13 September 2025 00:44:19 +0000 (0:00:00.142) 0:00:18.746 **** 2025-09-13 00:44:20.929498 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.929509 | orchestrator | 2025-09-13 00:44:20.929519 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-13 00:44:20.929530 | orchestrator | Saturday 13 September 2025 00:44:19 +0000 (0:00:00.144) 0:00:18.891 **** 2025-09-13 00:44:20.929541 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.929552 | orchestrator | 2025-09-13 00:44:20.929563 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-13 00:44:20.929573 | orchestrator | Saturday 13 September 2025 00:44:19 +0000 (0:00:00.116) 0:00:19.007 **** 2025-09-13 00:44:20.929586 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})  2025-09-13 00:44:20.929599 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})  2025-09-13 00:44:20.929610 | orchestrator | skipping: [testbed-node-3] 2025-09-13 
00:44:20.929621 | orchestrator | 2025-09-13 00:44:20.929632 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-13 00:44:20.929642 | orchestrator | Saturday 13 September 2025 00:44:20 +0000 (0:00:00.371) 0:00:19.379 **** 2025-09-13 00:44:20.929653 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})  2025-09-13 00:44:20.929664 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})  2025-09-13 00:44:20.929676 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.929686 | orchestrator | 2025-09-13 00:44:20.929697 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-13 00:44:20.929707 | orchestrator | Saturday 13 September 2025 00:44:20 +0000 (0:00:00.165) 0:00:19.544 **** 2025-09-13 00:44:20.929723 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})  2025-09-13 00:44:20.929735 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})  2025-09-13 00:44:20.929746 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.929756 | orchestrator | 2025-09-13 00:44:20.929767 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-13 00:44:20.929777 | orchestrator | Saturday 13 September 2025 00:44:20 +0000 (0:00:00.173) 0:00:19.717 **** 2025-09-13 00:44:20.929788 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})  2025-09-13 
00:44:20.929799 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})  2025-09-13 00:44:20.929810 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.929821 | orchestrator | 2025-09-13 00:44:20.929832 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-13 00:44:20.929859 | orchestrator | Saturday 13 September 2025 00:44:20 +0000 (0:00:00.185) 0:00:19.903 **** 2025-09-13 00:44:20.929870 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})  2025-09-13 00:44:20.929881 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})  2025-09-13 00:44:20.929892 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:20.929910 | orchestrator | 2025-09-13 00:44:20.929921 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-13 00:44:20.929932 | orchestrator | Saturday 13 September 2025 00:44:20 +0000 (0:00:00.194) 0:00:20.097 **** 2025-09-13 00:44:20.929943 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})  2025-09-13 00:44:20.929961 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})  2025-09-13 00:44:26.771566 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:26.771677 | orchestrator | 2025-09-13 00:44:26.771694 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-13 00:44:26.771708 | orchestrator | Saturday 13 September 2025 
00:44:20 +0000 (0:00:00.193) 0:00:20.291 **** 2025-09-13 00:44:26.771720 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})  2025-09-13 00:44:26.771732 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})  2025-09-13 00:44:26.771744 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:26.771755 | orchestrator | 2025-09-13 00:44:26.771767 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-13 00:44:26.771778 | orchestrator | Saturday 13 September 2025 00:44:21 +0000 (0:00:00.181) 0:00:20.472 **** 2025-09-13 00:44:26.771789 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})  2025-09-13 00:44:26.771800 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})  2025-09-13 00:44:26.771811 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:26.771822 | orchestrator | 2025-09-13 00:44:26.771834 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-13 00:44:26.771871 | orchestrator | Saturday 13 September 2025 00:44:21 +0000 (0:00:00.224) 0:00:20.697 **** 2025-09-13 00:44:26.771882 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:44:26.771894 | orchestrator | 2025-09-13 00:44:26.771905 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-13 00:44:26.771916 | orchestrator | Saturday 13 September 2025 00:44:21 +0000 (0:00:00.508) 0:00:21.205 **** 2025-09-13 00:44:26.771926 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:44:26.771937 | 
orchestrator | 2025-09-13 00:44:26.771948 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-13 00:44:26.771959 | orchestrator | Saturday 13 September 2025 00:44:22 +0000 (0:00:00.557) 0:00:21.763 **** 2025-09-13 00:44:26.771970 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:44:26.771981 | orchestrator | 2025-09-13 00:44:26.771991 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-13 00:44:26.772002 | orchestrator | Saturday 13 September 2025 00:44:22 +0000 (0:00:00.169) 0:00:21.933 **** 2025-09-13 00:44:26.772013 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'vg_name': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'}) 2025-09-13 00:44:26.772025 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'vg_name': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'}) 2025-09-13 00:44:26.772036 | orchestrator | 2025-09-13 00:44:26.772047 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-13 00:44:26.772058 | orchestrator | Saturday 13 September 2025 00:44:22 +0000 (0:00:00.207) 0:00:22.140 **** 2025-09-13 00:44:26.772069 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})  2025-09-13 00:44:26.772103 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})  2025-09-13 00:44:26.772117 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:26.772129 | orchestrator | 2025-09-13 00:44:26.772142 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-13 00:44:26.772154 | orchestrator | Saturday 13 September 2025 00:44:23 +0000 
(0:00:00.393) 0:00:22.533 **** 2025-09-13 00:44:26.772167 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})  2025-09-13 00:44:26.772180 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})  2025-09-13 00:44:26.772192 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:26.772204 | orchestrator | 2025-09-13 00:44:26.772216 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-13 00:44:26.772229 | orchestrator | Saturday 13 September 2025 00:44:23 +0000 (0:00:00.192) 0:00:22.726 **** 2025-09-13 00:44:26.772242 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})  2025-09-13 00:44:26.772255 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})  2025-09-13 00:44:26.772266 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:44:26.772276 | orchestrator | 2025-09-13 00:44:26.772287 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-13 00:44:26.772298 | orchestrator | Saturday 13 September 2025 00:44:23 +0000 (0:00:00.154) 0:00:22.880 **** 2025-09-13 00:44:26.772309 | orchestrator | ok: [testbed-node-3] => { 2025-09-13 00:44:26.772320 | orchestrator |  "lvm_report": { 2025-09-13 00:44:26.772331 | orchestrator |  "lv": [ 2025-09-13 00:44:26.772342 | orchestrator |  { 2025-09-13 00:44:26.772372 | orchestrator |  "lv_name": "osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e", 2025-09-13 00:44:26.772384 | orchestrator |  "vg_name": "ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e" 2025-09-13 00:44:26.772395 | 
orchestrator |  }, 2025-09-13 00:44:26.772405 | orchestrator |  { 2025-09-13 00:44:26.772416 | orchestrator |  "lv_name": "osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f", 2025-09-13 00:44:26.772427 | orchestrator |  "vg_name": "ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f" 2025-09-13 00:44:26.772438 | orchestrator |  } 2025-09-13 00:44:26.772449 | orchestrator |  ], 2025-09-13 00:44:26.772460 | orchestrator |  "pv": [ 2025-09-13 00:44:26.772471 | orchestrator |  { 2025-09-13 00:44:26.772481 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-13 00:44:26.772492 | orchestrator |  "vg_name": "ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e" 2025-09-13 00:44:26.772503 | orchestrator |  }, 2025-09-13 00:44:26.772514 | orchestrator |  { 2025-09-13 00:44:26.772525 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-13 00:44:26.772536 | orchestrator |  "vg_name": "ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f" 2025-09-13 00:44:26.772547 | orchestrator |  } 2025-09-13 00:44:26.772558 | orchestrator |  ] 2025-09-13 00:44:26.772568 | orchestrator |  } 2025-09-13 00:44:26.772579 | orchestrator | } 2025-09-13 00:44:26.772590 | orchestrator | 2025-09-13 00:44:26.772601 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-13 00:44:26.772612 | orchestrator | 2025-09-13 00:44:26.772623 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-13 00:44:26.772634 | orchestrator | Saturday 13 September 2025 00:44:23 +0000 (0:00:00.376) 0:00:23.257 **** 2025-09-13 00:44:26.772645 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-13 00:44:26.772665 | orchestrator | 2025-09-13 00:44:26.772676 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-13 00:44:26.772687 | orchestrator | Saturday 13 September 2025 00:44:24 +0000 (0:00:00.360) 0:00:23.617 **** 2025-09-13 00:44:26.772698 | orchestrator | ok: [testbed-node-4] 2025-09-13 
00:44:26.772708 | orchestrator | 2025-09-13 00:44:26.772719 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:26.772730 | orchestrator | Saturday 13 September 2025 00:44:24 +0000 (0:00:00.267) 0:00:23.885 **** 2025-09-13 00:44:26.772761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-13 00:44:26.772772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-13 00:44:26.772782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-13 00:44:26.772793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-13 00:44:26.772804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-13 00:44:26.772815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-13 00:44:26.772826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-13 00:44:26.772866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-13 00:44:26.772878 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-13 00:44:26.772889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-13 00:44:26.772899 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-13 00:44:26.772910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-13 00:44:26.772921 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-13 00:44:26.772932 | orchestrator | 2025-09-13 00:44:26.772942 | orchestrator | TASK 
[Add known links to the list of available block devices] ****************** 2025-09-13 00:44:26.772953 | orchestrator | Saturday 13 September 2025 00:44:25 +0000 (0:00:00.527) 0:00:24.412 **** 2025-09-13 00:44:26.772964 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:26.772975 | orchestrator | 2025-09-13 00:44:26.772985 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:26.772996 | orchestrator | Saturday 13 September 2025 00:44:25 +0000 (0:00:00.218) 0:00:24.631 **** 2025-09-13 00:44:26.773007 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:26.773018 | orchestrator | 2025-09-13 00:44:26.773029 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:26.773039 | orchestrator | Saturday 13 September 2025 00:44:25 +0000 (0:00:00.207) 0:00:24.838 **** 2025-09-13 00:44:26.773050 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:26.773061 | orchestrator | 2025-09-13 00:44:26.773071 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:26.773082 | orchestrator | Saturday 13 September 2025 00:44:25 +0000 (0:00:00.488) 0:00:25.326 **** 2025-09-13 00:44:26.773092 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:26.773103 | orchestrator | 2025-09-13 00:44:26.773114 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:26.773125 | orchestrator | Saturday 13 September 2025 00:44:26 +0000 (0:00:00.190) 0:00:25.516 **** 2025-09-13 00:44:26.773135 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:26.773146 | orchestrator | 2025-09-13 00:44:26.773157 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:26.773167 | orchestrator | Saturday 13 September 2025 00:44:26 +0000 (0:00:00.189) 0:00:25.706 **** 2025-09-13 
00:44:26.773178 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:26.773189 | orchestrator | 2025-09-13 00:44:26.773207 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:26.773218 | orchestrator | Saturday 13 September 2025 00:44:26 +0000 (0:00:00.177) 0:00:25.883 **** 2025-09-13 00:44:26.773229 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:26.773240 | orchestrator | 2025-09-13 00:44:26.773258 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:37.176726 | orchestrator | Saturday 13 September 2025 00:44:26 +0000 (0:00:00.250) 0:00:26.134 **** 2025-09-13 00:44:37.176830 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:37.176846 | orchestrator | 2025-09-13 00:44:37.176907 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:37.176920 | orchestrator | Saturday 13 September 2025 00:44:26 +0000 (0:00:00.215) 0:00:26.349 **** 2025-09-13 00:44:37.176931 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8) 2025-09-13 00:44:37.176943 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8) 2025-09-13 00:44:37.176955 | orchestrator | 2025-09-13 00:44:37.176966 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:37.176977 | orchestrator | Saturday 13 September 2025 00:44:27 +0000 (0:00:00.401) 0:00:26.751 **** 2025-09-13 00:44:37.176988 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e924364d-2e91-46ce-bd4b-cca5d229d1e6) 2025-09-13 00:44:37.176999 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e924364d-2e91-46ce-bd4b-cca5d229d1e6) 2025-09-13 00:44:37.177010 | orchestrator | 2025-09-13 00:44:37.177021 | orchestrator | TASK [Add known 
links to the list of available block devices] ****************** 2025-09-13 00:44:37.177032 | orchestrator | Saturday 13 September 2025 00:44:27 +0000 (0:00:00.428) 0:00:27.180 **** 2025-09-13 00:44:37.177043 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f868cbab-65ba-4325-b003-03d97073cddb) 2025-09-13 00:44:37.177054 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f868cbab-65ba-4325-b003-03d97073cddb) 2025-09-13 00:44:37.177065 | orchestrator | 2025-09-13 00:44:37.177075 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:37.177087 | orchestrator | Saturday 13 September 2025 00:44:28 +0000 (0:00:00.481) 0:00:27.661 **** 2025-09-13 00:44:37.177098 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5a3f219a-02e3-456c-9d7f-0c5a8049cd2b) 2025-09-13 00:44:37.177109 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5a3f219a-02e3-456c-9d7f-0c5a8049cd2b) 2025-09-13 00:44:37.177120 | orchestrator | 2025-09-13 00:44:37.177131 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:37.177142 | orchestrator | Saturday 13 September 2025 00:44:28 +0000 (0:00:00.450) 0:00:28.112 **** 2025-09-13 00:44:37.177153 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-13 00:44:37.177164 | orchestrator | 2025-09-13 00:44:37.177175 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:44:37.177187 | orchestrator | Saturday 13 September 2025 00:44:29 +0000 (0:00:00.376) 0:00:28.488 **** 2025-09-13 00:44:37.177197 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-13 00:44:37.177227 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-13 00:44:37.177241 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-13 00:44:37.177253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-13 00:44:37.177265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-13 00:44:37.177277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-13 00:44:37.177289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-13 00:44:37.177325 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-13 00:44:37.177337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-13 00:44:37.177349 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-13 00:44:37.177361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-13 00:44:37.177373 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-13 00:44:37.177387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-13 00:44:37.177399 | orchestrator | 2025-09-13 00:44:37.177411 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:44:37.177423 | orchestrator | Saturday 13 September 2025 00:44:29 +0000 (0:00:00.598) 0:00:29.087 **** 2025-09-13 00:44:37.177436 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:37.177448 | orchestrator | 2025-09-13 00:44:37.177460 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:44:37.177472 | orchestrator | Saturday 13 September 2025 00:44:29 +0000 
(0:00:00.205) 0:00:29.292 **** 2025-09-13 00:44:37.177484 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:37.177497 | orchestrator | 2025-09-13 00:44:37.177510 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:44:37.177521 | orchestrator | Saturday 13 September 2025 00:44:30 +0000 (0:00:00.217) 0:00:29.510 **** 2025-09-13 00:44:37.177534 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:37.177546 | orchestrator | 2025-09-13 00:44:37.177559 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:44:37.177571 | orchestrator | Saturday 13 September 2025 00:44:30 +0000 (0:00:00.205) 0:00:29.715 **** 2025-09-13 00:44:37.177584 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:37.177597 | orchestrator | 2025-09-13 00:44:37.177627 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:44:37.177639 | orchestrator | Saturday 13 September 2025 00:44:30 +0000 (0:00:00.195) 0:00:29.910 **** 2025-09-13 00:44:37.177650 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:37.177661 | orchestrator | 2025-09-13 00:44:37.177672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:44:37.177683 | orchestrator | Saturday 13 September 2025 00:44:30 +0000 (0:00:00.204) 0:00:30.114 **** 2025-09-13 00:44:37.177693 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:37.177704 | orchestrator | 2025-09-13 00:44:37.177715 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:44:37.177726 | orchestrator | Saturday 13 September 2025 00:44:30 +0000 (0:00:00.211) 0:00:30.326 **** 2025-09-13 00:44:37.177737 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:37.177748 | orchestrator | 2025-09-13 00:44:37.177759 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-09-13 00:44:37.177770 | orchestrator | Saturday 13 September 2025 00:44:31 +0000 (0:00:00.245) 0:00:30.571 **** 2025-09-13 00:44:37.177781 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:37.177791 | orchestrator | 2025-09-13 00:44:37.177802 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:44:37.177813 | orchestrator | Saturday 13 September 2025 00:44:31 +0000 (0:00:00.204) 0:00:30.776 **** 2025-09-13 00:44:37.177824 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-13 00:44:37.177835 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-13 00:44:37.177846 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-13 00:44:37.177880 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-13 00:44:37.177891 | orchestrator | 2025-09-13 00:44:37.177903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:44:37.177914 | orchestrator | Saturday 13 September 2025 00:44:32 +0000 (0:00:00.830) 0:00:31.607 **** 2025-09-13 00:44:37.177933 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:37.177945 | orchestrator | 2025-09-13 00:44:37.177955 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:44:37.177966 | orchestrator | Saturday 13 September 2025 00:44:32 +0000 (0:00:00.204) 0:00:31.812 **** 2025-09-13 00:44:37.177977 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:37.177988 | orchestrator | 2025-09-13 00:44:37.177999 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:44:37.178010 | orchestrator | Saturday 13 September 2025 00:44:32 +0000 (0:00:00.194) 0:00:32.006 **** 2025-09-13 00:44:37.178076 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:37.178087 | orchestrator | 2025-09-13 
00:44:37.178098 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:44:37.178109 | orchestrator | Saturday 13 September 2025 00:44:33 +0000 (0:00:00.627) 0:00:32.633 **** 2025-09-13 00:44:37.178120 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:37.178131 | orchestrator | 2025-09-13 00:44:37.178142 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-13 00:44:37.178153 | orchestrator | Saturday 13 September 2025 00:44:33 +0000 (0:00:00.205) 0:00:32.839 **** 2025-09-13 00:44:37.178165 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:37.178175 | orchestrator | 2025-09-13 00:44:37.178186 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-13 00:44:37.178197 | orchestrator | Saturday 13 September 2025 00:44:33 +0000 (0:00:00.144) 0:00:32.983 **** 2025-09-13 00:44:37.178208 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b9d4bd55-4398-5073-b181-64dcd216e500'}}) 2025-09-13 00:44:37.178220 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b087737a-96b5-5170-ab1c-c312068a0bca'}}) 2025-09-13 00:44:37.178231 | orchestrator | 2025-09-13 00:44:37.178242 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-13 00:44:37.178253 | orchestrator | Saturday 13 September 2025 00:44:33 +0000 (0:00:00.194) 0:00:33.177 **** 2025-09-13 00:44:37.178265 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'}) 2025-09-13 00:44:37.178277 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'}) 2025-09-13 00:44:37.178288 | orchestrator | 2025-09-13 
00:44:37.178299 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-13 00:44:37.178310 | orchestrator | Saturday 13 September 2025 00:44:35 +0000 (0:00:01.902) 0:00:35.080 **** 2025-09-13 00:44:37.178321 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:37.178333 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:37.178344 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:37.178355 | orchestrator | 2025-09-13 00:44:37.178366 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-13 00:44:37.178377 | orchestrator | Saturday 13 September 2025 00:44:35 +0000 (0:00:00.139) 0:00:35.219 **** 2025-09-13 00:44:37.178388 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'}) 2025-09-13 00:44:37.178399 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'}) 2025-09-13 00:44:37.178410 | orchestrator | 2025-09-13 00:44:37.178429 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-13 00:44:43.098215 | orchestrator | Saturday 13 September 2025 00:44:37 +0000 (0:00:01.316) 0:00:36.536 **** 2025-09-13 00:44:43.098349 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:43.098366 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 
'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:43.098379 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.098391 | orchestrator | 2025-09-13 00:44:43.098403 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-13 00:44:43.098414 | orchestrator | Saturday 13 September 2025 00:44:37 +0000 (0:00:00.175) 0:00:36.712 **** 2025-09-13 00:44:43.098425 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.098436 | orchestrator | 2025-09-13 00:44:43.098447 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-13 00:44:43.098459 | orchestrator | Saturday 13 September 2025 00:44:37 +0000 (0:00:00.157) 0:00:36.870 **** 2025-09-13 00:44:43.098470 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:43.098499 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:43.098510 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.098521 | orchestrator | 2025-09-13 00:44:43.098532 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-13 00:44:43.098543 | orchestrator | Saturday 13 September 2025 00:44:37 +0000 (0:00:00.181) 0:00:37.051 **** 2025-09-13 00:44:43.098554 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.098565 | orchestrator | 2025-09-13 00:44:43.098576 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-13 00:44:43.098586 | orchestrator | Saturday 13 September 2025 00:44:37 +0000 (0:00:00.151) 0:00:37.202 **** 2025-09-13 00:44:43.098597 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:43.098608 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:43.098619 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.098630 | orchestrator | 2025-09-13 00:44:43.098641 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-13 00:44:43.098652 | orchestrator | Saturday 13 September 2025 00:44:38 +0000 (0:00:00.183) 0:00:37.386 **** 2025-09-13 00:44:43.098671 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.098682 | orchestrator | 2025-09-13 00:44:43.098694 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-13 00:44:43.098705 | orchestrator | Saturday 13 September 2025 00:44:38 +0000 (0:00:00.467) 0:00:37.853 **** 2025-09-13 00:44:43.098715 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:43.098726 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:43.098738 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.098751 | orchestrator | 2025-09-13 00:44:43.098764 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-13 00:44:43.098777 | orchestrator | Saturday 13 September 2025 00:44:38 +0000 (0:00:00.193) 0:00:38.047 **** 2025-09-13 00:44:43.098790 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:44:43.098804 | orchestrator | 2025-09-13 00:44:43.098816 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] 
**************** 2025-09-13 00:44:43.098829 | orchestrator | Saturday 13 September 2025 00:44:38 +0000 (0:00:00.127) 0:00:38.174 **** 2025-09-13 00:44:43.098849 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:43.098887 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:43.098900 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.098913 | orchestrator | 2025-09-13 00:44:43.098925 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-13 00:44:43.098938 | orchestrator | Saturday 13 September 2025 00:44:39 +0000 (0:00:00.221) 0:00:38.396 **** 2025-09-13 00:44:43.098951 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:43.098963 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:43.098976 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.098989 | orchestrator | 2025-09-13 00:44:43.099000 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-13 00:44:43.099011 | orchestrator | Saturday 13 September 2025 00:44:39 +0000 (0:00:00.165) 0:00:38.561 **** 2025-09-13 00:44:43.099041 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:43.099053 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 
'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:43.099064 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.099075 | orchestrator | 2025-09-13 00:44:43.099086 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-13 00:44:43.099097 | orchestrator | Saturday 13 September 2025 00:44:39 +0000 (0:00:00.207) 0:00:38.769 **** 2025-09-13 00:44:43.099108 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.099119 | orchestrator | 2025-09-13 00:44:43.099130 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-13 00:44:43.099140 | orchestrator | Saturday 13 September 2025 00:44:39 +0000 (0:00:00.147) 0:00:38.916 **** 2025-09-13 00:44:43.099151 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.099162 | orchestrator | 2025-09-13 00:44:43.099173 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-13 00:44:43.099184 | orchestrator | Saturday 13 September 2025 00:44:39 +0000 (0:00:00.145) 0:00:39.062 **** 2025-09-13 00:44:43.099194 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.099205 | orchestrator | 2025-09-13 00:44:43.099216 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-13 00:44:43.099227 | orchestrator | Saturday 13 September 2025 00:44:39 +0000 (0:00:00.168) 0:00:39.230 **** 2025-09-13 00:44:43.099238 | orchestrator | ok: [testbed-node-4] => { 2025-09-13 00:44:43.099249 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-13 00:44:43.099260 | orchestrator | } 2025-09-13 00:44:43.099271 | orchestrator | 2025-09-13 00:44:43.099283 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-13 00:44:43.099294 | orchestrator | Saturday 13 September 2025 00:44:40 +0000 (0:00:00.142) 0:00:39.373 **** 2025-09-13 00:44:43.099304 | orchestrator | 
ok: [testbed-node-4] => { 2025-09-13 00:44:43.099315 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-13 00:44:43.099326 | orchestrator | } 2025-09-13 00:44:43.099337 | orchestrator | 2025-09-13 00:44:43.099348 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-13 00:44:43.099359 | orchestrator | Saturday 13 September 2025 00:44:40 +0000 (0:00:00.161) 0:00:39.534 **** 2025-09-13 00:44:43.099369 | orchestrator | ok: [testbed-node-4] => { 2025-09-13 00:44:43.099380 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-13 00:44:43.099399 | orchestrator | } 2025-09-13 00:44:43.099410 | orchestrator | 2025-09-13 00:44:43.099421 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-13 00:44:43.099432 | orchestrator | Saturday 13 September 2025 00:44:40 +0000 (0:00:00.147) 0:00:39.682 **** 2025-09-13 00:44:43.099443 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:44:43.099454 | orchestrator | 2025-09-13 00:44:43.099465 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-13 00:44:43.099476 | orchestrator | Saturday 13 September 2025 00:44:41 +0000 (0:00:00.725) 0:00:40.407 **** 2025-09-13 00:44:43.099492 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:44:43.099503 | orchestrator | 2025-09-13 00:44:43.099514 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-13 00:44:43.099525 | orchestrator | Saturday 13 September 2025 00:44:41 +0000 (0:00:00.486) 0:00:40.894 **** 2025-09-13 00:44:43.099536 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:44:43.099548 | orchestrator | 2025-09-13 00:44:43.099559 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-13 00:44:43.099569 | orchestrator | Saturday 13 September 2025 00:44:42 +0000 (0:00:00.507) 0:00:41.402 **** 2025-09-13 
00:44:43.099580 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:44:43.099591 | orchestrator | 2025-09-13 00:44:43.099602 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-13 00:44:43.099613 | orchestrator | Saturday 13 September 2025 00:44:42 +0000 (0:00:00.141) 0:00:41.543 **** 2025-09-13 00:44:43.099624 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.099634 | orchestrator | 2025-09-13 00:44:43.099645 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-13 00:44:43.099656 | orchestrator | Saturday 13 September 2025 00:44:42 +0000 (0:00:00.107) 0:00:41.651 **** 2025-09-13 00:44:43.099667 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.099678 | orchestrator | 2025-09-13 00:44:43.099689 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-13 00:44:43.099700 | orchestrator | Saturday 13 September 2025 00:44:42 +0000 (0:00:00.107) 0:00:41.758 **** 2025-09-13 00:44:43.099711 | orchestrator | ok: [testbed-node-4] => { 2025-09-13 00:44:43.099722 | orchestrator |  "vgs_report": { 2025-09-13 00:44:43.099733 | orchestrator |  "vg": [] 2025-09-13 00:44:43.099744 | orchestrator |  } 2025-09-13 00:44:43.099755 | orchestrator | } 2025-09-13 00:44:43.099766 | orchestrator | 2025-09-13 00:44:43.099777 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-13 00:44:43.099788 | orchestrator | Saturday 13 September 2025 00:44:42 +0000 (0:00:00.147) 0:00:41.905 **** 2025-09-13 00:44:43.099799 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.099809 | orchestrator | 2025-09-13 00:44:43.099820 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-13 00:44:43.099831 | orchestrator | Saturday 13 September 2025 00:44:42 +0000 (0:00:00.138) 0:00:42.044 **** 2025-09-13 
00:44:43.099842 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.099853 | orchestrator | 2025-09-13 00:44:43.099883 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-13 00:44:43.099894 | orchestrator | Saturday 13 September 2025 00:44:42 +0000 (0:00:00.139) 0:00:42.184 **** 2025-09-13 00:44:43.099905 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.099916 | orchestrator | 2025-09-13 00:44:43.099927 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-13 00:44:43.099938 | orchestrator | Saturday 13 September 2025 00:44:42 +0000 (0:00:00.138) 0:00:42.322 **** 2025-09-13 00:44:43.099949 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:43.099960 | orchestrator | 2025-09-13 00:44:43.099971 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-13 00:44:43.099989 | orchestrator | Saturday 13 September 2025 00:44:43 +0000 (0:00:00.136) 0:00:42.459 **** 2025-09-13 00:44:47.826264 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.826374 | orchestrator | 2025-09-13 00:44:47.826415 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-13 00:44:47.826428 | orchestrator | Saturday 13 September 2025 00:44:43 +0000 (0:00:00.170) 0:00:42.630 **** 2025-09-13 00:44:47.826439 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.826450 | orchestrator | 2025-09-13 00:44:47.826462 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-13 00:44:47.826473 | orchestrator | Saturday 13 September 2025 00:44:43 +0000 (0:00:00.329) 0:00:42.959 **** 2025-09-13 00:44:47.826484 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.826495 | orchestrator | 2025-09-13 00:44:47.826506 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
**************** 2025-09-13 00:44:47.826516 | orchestrator | Saturday 13 September 2025 00:44:43 +0000 (0:00:00.141) 0:00:43.101 **** 2025-09-13 00:44:47.826527 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.826538 | orchestrator | 2025-09-13 00:44:47.826549 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-13 00:44:47.826560 | orchestrator | Saturday 13 September 2025 00:44:43 +0000 (0:00:00.141) 0:00:43.242 **** 2025-09-13 00:44:47.826571 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.826582 | orchestrator | 2025-09-13 00:44:47.826593 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-13 00:44:47.826604 | orchestrator | Saturday 13 September 2025 00:44:44 +0000 (0:00:00.139) 0:00:43.382 **** 2025-09-13 00:44:47.826614 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.826625 | orchestrator | 2025-09-13 00:44:47.826636 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-13 00:44:47.826647 | orchestrator | Saturday 13 September 2025 00:44:44 +0000 (0:00:00.141) 0:00:43.523 **** 2025-09-13 00:44:47.826658 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.826669 | orchestrator | 2025-09-13 00:44:47.826680 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-13 00:44:47.826691 | orchestrator | Saturday 13 September 2025 00:44:44 +0000 (0:00:00.138) 0:00:43.662 **** 2025-09-13 00:44:47.826701 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.826712 | orchestrator | 2025-09-13 00:44:47.826723 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-13 00:44:47.826734 | orchestrator | Saturday 13 September 2025 00:44:44 +0000 (0:00:00.147) 0:00:43.810 **** 2025-09-13 00:44:47.826745 | orchestrator | skipping: [testbed-node-4] 
2025-09-13 00:44:47.826756 | orchestrator | 2025-09-13 00:44:47.826766 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-13 00:44:47.826777 | orchestrator | Saturday 13 September 2025 00:44:44 +0000 (0:00:00.146) 0:00:43.956 **** 2025-09-13 00:44:47.826788 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.826801 | orchestrator | 2025-09-13 00:44:47.826814 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-13 00:44:47.826826 | orchestrator | Saturday 13 September 2025 00:44:44 +0000 (0:00:00.138) 0:00:44.095 **** 2025-09-13 00:44:47.826855 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:47.826896 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:47.826910 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.826922 | orchestrator | 2025-09-13 00:44:47.826934 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-13 00:44:47.826947 | orchestrator | Saturday 13 September 2025 00:44:44 +0000 (0:00:00.160) 0:00:44.256 **** 2025-09-13 00:44:47.826959 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:47.826971 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:47.826993 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.827006 | orchestrator | 2025-09-13 00:44:47.827018 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] 
************************************* 2025-09-13 00:44:47.827030 | orchestrator | Saturday 13 September 2025 00:44:45 +0000 (0:00:00.147) 0:00:44.404 **** 2025-09-13 00:44:47.827043 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:47.827056 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:47.827068 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.827080 | orchestrator | 2025-09-13 00:44:47.827092 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-13 00:44:47.827104 | orchestrator | Saturday 13 September 2025 00:44:45 +0000 (0:00:00.185) 0:00:44.589 **** 2025-09-13 00:44:47.827116 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:47.827129 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:47.827141 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.827154 | orchestrator | 2025-09-13 00:44:47.827164 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-13 00:44:47.827192 | orchestrator | Saturday 13 September 2025 00:44:45 +0000 (0:00:00.349) 0:00:44.939 **** 2025-09-13 00:44:47.827204 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:47.827215 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 
'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:47.827226 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.827237 | orchestrator | 2025-09-13 00:44:47.827248 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-13 00:44:47.827259 | orchestrator | Saturday 13 September 2025 00:44:45 +0000 (0:00:00.154) 0:00:45.094 **** 2025-09-13 00:44:47.827269 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:47.827280 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:47.827291 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.827303 | orchestrator | 2025-09-13 00:44:47.827313 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-13 00:44:47.827325 | orchestrator | Saturday 13 September 2025 00:44:45 +0000 (0:00:00.151) 0:00:45.246 **** 2025-09-13 00:44:47.827336 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:47.827347 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:47.827358 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.827368 | orchestrator | 2025-09-13 00:44:47.827379 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-13 00:44:47.827390 | orchestrator | Saturday 13 September 2025 00:44:46 +0000 (0:00:00.150) 0:00:45.397 **** 2025-09-13 00:44:47.827401 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:47.827418 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:47.827429 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.827440 | orchestrator | 2025-09-13 00:44:47.827451 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-13 00:44:47.827500 | orchestrator | Saturday 13 September 2025 00:44:46 +0000 (0:00:00.146) 0:00:45.543 **** 2025-09-13 00:44:47.827513 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:44:47.827524 | orchestrator | 2025-09-13 00:44:47.827535 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-13 00:44:47.827546 | orchestrator | Saturday 13 September 2025 00:44:46 +0000 (0:00:00.519) 0:00:46.063 **** 2025-09-13 00:44:47.827557 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:44:47.827567 | orchestrator | 2025-09-13 00:44:47.827578 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-13 00:44:47.827589 | orchestrator | Saturday 13 September 2025 00:44:47 +0000 (0:00:00.517) 0:00:46.580 **** 2025-09-13 00:44:47.827600 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:44:47.827611 | orchestrator | 2025-09-13 00:44:47.827622 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-13 00:44:47.827633 | orchestrator | Saturday 13 September 2025 00:44:47 +0000 (0:00:00.130) 0:00:46.710 **** 2025-09-13 00:44:47.827643 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'vg_name': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'}) 2025-09-13 00:44:47.827655 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'vg_name': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'}) 2025-09-13 00:44:47.827666 | orchestrator | 2025-09-13 00:44:47.827677 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-13 00:44:47.827688 | orchestrator | Saturday 13 September 2025 00:44:47 +0000 (0:00:00.170) 0:00:46.881 **** 2025-09-13 00:44:47.827698 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:47.827709 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:47.827720 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:47.827731 | orchestrator | 2025-09-13 00:44:47.827742 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-13 00:44:47.827753 | orchestrator | Saturday 13 September 2025 00:44:47 +0000 (0:00:00.158) 0:00:47.039 **** 2025-09-13 00:44:47.827764 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})  2025-09-13 00:44:47.827775 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})  2025-09-13 00:44:47.827793 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:44:54.039292 | orchestrator | 2025-09-13 00:44:54.039402 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-13 00:44:54.039419 | orchestrator | Saturday 13 September 2025 00:44:47 +0000 (0:00:00.148) 0:00:47.188 **** 2025-09-13 00:44:54.039432 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})
2025-09-13 00:44:54.039445 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})
2025-09-13 00:44:54.039456 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:44:54.039468 | orchestrator |
2025-09-13 00:44:54.039480 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-13 00:44:54.039491 | orchestrator | Saturday 13 September 2025 00:44:47 +0000 (0:00:00.158) 0:00:47.347 ****
2025-09-13 00:44:54.039524 | orchestrator | ok: [testbed-node-4] => {
2025-09-13 00:44:54.039536 | orchestrator |  "lvm_report": {
2025-09-13 00:44:54.039549 | orchestrator |  "lv": [
2025-09-13 00:44:54.039560 | orchestrator |  {
2025-09-13 00:44:54.039572 | orchestrator |  "lv_name": "osd-block-b087737a-96b5-5170-ab1c-c312068a0bca",
2025-09-13 00:44:54.039583 | orchestrator |  "vg_name": "ceph-b087737a-96b5-5170-ab1c-c312068a0bca"
2025-09-13 00:44:54.039595 | orchestrator |  },
2025-09-13 00:44:54.039605 | orchestrator |  {
2025-09-13 00:44:54.039617 | orchestrator |  "lv_name": "osd-block-b9d4bd55-4398-5073-b181-64dcd216e500",
2025-09-13 00:44:54.039628 | orchestrator |  "vg_name": "ceph-b9d4bd55-4398-5073-b181-64dcd216e500"
2025-09-13 00:44:54.039638 | orchestrator |  }
2025-09-13 00:44:54.039649 | orchestrator |  ],
2025-09-13 00:44:54.039661 | orchestrator |  "pv": [
2025-09-13 00:44:54.039672 | orchestrator |  {
2025-09-13 00:44:54.039683 | orchestrator |  "pv_name": "/dev/sdb",
2025-09-13 00:44:54.039694 | orchestrator |  "vg_name": "ceph-b9d4bd55-4398-5073-b181-64dcd216e500"
2025-09-13 00:44:54.039704 | orchestrator |  },
2025-09-13 00:44:54.039716 | orchestrator |  {
2025-09-13 00:44:54.039726 | orchestrator |  "pv_name": "/dev/sdc",
2025-09-13 00:44:54.039738 | orchestrator |  "vg_name": "ceph-b087737a-96b5-5170-ab1c-c312068a0bca"
2025-09-13 00:44:54.039749 | orchestrator |  }
2025-09-13 00:44:54.039760 | orchestrator |  ]
2025-09-13 00:44:54.039770 | orchestrator |  }
2025-09-13 00:44:54.039782 | orchestrator | }
2025-09-13 00:44:54.039794 | orchestrator |
2025-09-13 00:44:54.039808 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-13 00:44:54.039820 | orchestrator |
2025-09-13 00:44:54.039832 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-13 00:44:54.039845 | orchestrator | Saturday 13 September 2025 00:44:48 +0000 (0:00:00.502) 0:00:47.849 ****
2025-09-13 00:44:54.039857 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-13 00:44:54.039893 | orchestrator |
2025-09-13 00:44:54.039924 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-13 00:44:54.039937 | orchestrator | Saturday 13 September 2025 00:44:48 +0000 (0:00:00.264) 0:00:48.114 ****
2025-09-13 00:44:54.039950 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:44:54.039963 | orchestrator |
2025-09-13 00:44:54.039976 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-13 00:44:54.039989 | orchestrator | Saturday 13 September 2025 00:44:48 +0000 (0:00:00.240) 0:00:48.354 ****
2025-09-13 00:44:54.040001 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-09-13 00:44:54.040014 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-09-13 00:44:54.040027 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-09-13 00:44:54.040039 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-09-13 00:44:54.040052 | orchestrator | included:
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-13 00:44:54.040064 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-13 00:44:54.040077 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-13 00:44:54.040090 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-13 00:44:54.040103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-13 00:44:54.040115 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-13 00:44:54.040127 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-13 00:44:54.040147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-13 00:44:54.040158 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-13 00:44:54.040169 | orchestrator | 2025-09-13 00:44:54.040180 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:54.040191 | orchestrator | Saturday 13 September 2025 00:44:49 +0000 (0:00:00.440) 0:00:48.795 **** 2025-09-13 00:44:54.040202 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:44:54.040218 | orchestrator | 2025-09-13 00:44:54.040229 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:54.040240 | orchestrator | Saturday 13 September 2025 00:44:49 +0000 (0:00:00.203) 0:00:48.999 **** 2025-09-13 00:44:54.040251 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:44:54.040262 | orchestrator | 2025-09-13 00:44:54.040273 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:54.040302 | orchestrator | 
Saturday 13 September 2025 00:44:49 +0000 (0:00:00.198) 0:00:49.198 **** 2025-09-13 00:44:54.040314 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:44:54.040325 | orchestrator | 2025-09-13 00:44:54.040336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:54.040347 | orchestrator | Saturday 13 September 2025 00:44:50 +0000 (0:00:00.201) 0:00:49.399 **** 2025-09-13 00:44:54.040358 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:44:54.040369 | orchestrator | 2025-09-13 00:44:54.040380 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:54.040391 | orchestrator | Saturday 13 September 2025 00:44:50 +0000 (0:00:00.199) 0:00:49.598 **** 2025-09-13 00:44:54.040402 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:44:54.040413 | orchestrator | 2025-09-13 00:44:54.040424 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:54.040435 | orchestrator | Saturday 13 September 2025 00:44:50 +0000 (0:00:00.196) 0:00:49.795 **** 2025-09-13 00:44:54.040445 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:44:54.040456 | orchestrator | 2025-09-13 00:44:54.040467 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:54.040478 | orchestrator | Saturday 13 September 2025 00:44:51 +0000 (0:00:00.665) 0:00:50.460 **** 2025-09-13 00:44:54.040489 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:44:54.040500 | orchestrator | 2025-09-13 00:44:54.040511 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:54.040522 | orchestrator | Saturday 13 September 2025 00:44:51 +0000 (0:00:00.226) 0:00:50.687 **** 2025-09-13 00:44:54.040533 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:44:54.040544 | orchestrator | 2025-09-13 00:44:54.040555 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:54.040566 | orchestrator | Saturday 13 September 2025 00:44:51 +0000 (0:00:00.230) 0:00:50.918 **** 2025-09-13 00:44:54.040577 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96) 2025-09-13 00:44:54.040589 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96) 2025-09-13 00:44:54.040600 | orchestrator | 2025-09-13 00:44:54.040611 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:54.040622 | orchestrator | Saturday 13 September 2025 00:44:51 +0000 (0:00:00.442) 0:00:51.360 **** 2025-09-13 00:44:54.040633 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1763dbba-d504-4b6d-865a-93cad2d65fc8) 2025-09-13 00:44:54.040644 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1763dbba-d504-4b6d-865a-93cad2d65fc8) 2025-09-13 00:44:54.040655 | orchestrator | 2025-09-13 00:44:54.040666 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:54.040677 | orchestrator | Saturday 13 September 2025 00:44:52 +0000 (0:00:00.435) 0:00:51.796 **** 2025-09-13 00:44:54.040702 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c5da3e8c-99b7-4761-a17c-7637f0eb6556) 2025-09-13 00:44:54.040714 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c5da3e8c-99b7-4761-a17c-7637f0eb6556) 2025-09-13 00:44:54.040725 | orchestrator | 2025-09-13 00:44:54.040736 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:54.040747 | orchestrator | Saturday 13 September 2025 00:44:52 +0000 (0:00:00.451) 0:00:52.247 **** 2025-09-13 00:44:54.040757 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_9346358d-8291-41dd-be96-0d8c84c54113) 2025-09-13 00:44:54.040768 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9346358d-8291-41dd-be96-0d8c84c54113) 2025-09-13 00:44:54.040779 | orchestrator | 2025-09-13 00:44:54.040790 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-13 00:44:54.040801 | orchestrator | Saturday 13 September 2025 00:44:53 +0000 (0:00:00.425) 0:00:52.673 **** 2025-09-13 00:44:54.040812 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-13 00:44:54.040823 | orchestrator | 2025-09-13 00:44:54.040834 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:44:54.040845 | orchestrator | Saturday 13 September 2025 00:44:53 +0000 (0:00:00.313) 0:00:52.986 **** 2025-09-13 00:44:54.040856 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-13 00:44:54.040867 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-13 00:44:54.040896 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-13 00:44:54.040907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-13 00:44:54.040918 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-13 00:44:54.040929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-13 00:44:54.040940 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-13 00:44:54.040951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-13 00:44:54.040962 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-13 00:44:54.040973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-13 00:44:54.040984 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-13 00:44:54.041001 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-13 00:45:02.398162 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-13 00:45:02.398267 | orchestrator | 2025-09-13 00:45:02.398280 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:45:02.398290 | orchestrator | Saturday 13 September 2025 00:44:54 +0000 (0:00:00.407) 0:00:53.393 **** 2025-09-13 00:45:02.398299 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:02.398309 | orchestrator | 2025-09-13 00:45:02.398319 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:45:02.398328 | orchestrator | Saturday 13 September 2025 00:44:54 +0000 (0:00:00.195) 0:00:53.589 **** 2025-09-13 00:45:02.398337 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:02.398346 | orchestrator | 2025-09-13 00:45:02.398354 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:45:02.398363 | orchestrator | Saturday 13 September 2025 00:44:54 +0000 (0:00:00.167) 0:00:53.757 **** 2025-09-13 00:45:02.398372 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:02.398381 | orchestrator | 2025-09-13 00:45:02.398390 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:45:02.398421 | orchestrator | Saturday 13 September 2025 00:44:54 +0000 (0:00:00.477) 0:00:54.235 **** 2025-09-13 00:45:02.398430 | orchestrator | 
skipping: [testbed-node-5] 2025-09-13 00:45:02.398439 | orchestrator | 2025-09-13 00:45:02.398448 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:45:02.398456 | orchestrator | Saturday 13 September 2025 00:44:55 +0000 (0:00:00.186) 0:00:54.421 **** 2025-09-13 00:45:02.398465 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:02.398474 | orchestrator | 2025-09-13 00:45:02.398482 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:45:02.398491 | orchestrator | Saturday 13 September 2025 00:44:55 +0000 (0:00:00.177) 0:00:54.599 **** 2025-09-13 00:45:02.398500 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:02.398509 | orchestrator | 2025-09-13 00:45:02.398517 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:45:02.398526 | orchestrator | Saturday 13 September 2025 00:44:55 +0000 (0:00:00.270) 0:00:54.869 **** 2025-09-13 00:45:02.398535 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:02.398544 | orchestrator | 2025-09-13 00:45:02.398552 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:45:02.398561 | orchestrator | Saturday 13 September 2025 00:44:55 +0000 (0:00:00.180) 0:00:55.049 **** 2025-09-13 00:45:02.398570 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:02.398578 | orchestrator | 2025-09-13 00:45:02.398587 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:45:02.398596 | orchestrator | Saturday 13 September 2025 00:44:55 +0000 (0:00:00.178) 0:00:55.228 **** 2025-09-13 00:45:02.398605 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-13 00:45:02.398614 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-13 00:45:02.398624 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-13 
00:45:02.398632 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-13 00:45:02.398641 | orchestrator | 2025-09-13 00:45:02.398650 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:45:02.398658 | orchestrator | Saturday 13 September 2025 00:44:56 +0000 (0:00:00.661) 0:00:55.890 **** 2025-09-13 00:45:02.398667 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:02.398676 | orchestrator | 2025-09-13 00:45:02.398685 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:45:02.398694 | orchestrator | Saturday 13 September 2025 00:44:56 +0000 (0:00:00.185) 0:00:56.075 **** 2025-09-13 00:45:02.398703 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:02.398711 | orchestrator | 2025-09-13 00:45:02.398721 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:45:02.398730 | orchestrator | Saturday 13 September 2025 00:44:56 +0000 (0:00:00.167) 0:00:56.242 **** 2025-09-13 00:45:02.398738 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:02.398747 | orchestrator | 2025-09-13 00:45:02.398756 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-13 00:45:02.398764 | orchestrator | Saturday 13 September 2025 00:44:57 +0000 (0:00:00.178) 0:00:56.421 **** 2025-09-13 00:45:02.398773 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:02.398782 | orchestrator | 2025-09-13 00:45:02.398791 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-13 00:45:02.398800 | orchestrator | Saturday 13 September 2025 00:44:57 +0000 (0:00:00.187) 0:00:56.608 **** 2025-09-13 00:45:02.398808 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:02.398817 | orchestrator | 2025-09-13 00:45:02.398826 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] *******************
2025-09-13 00:45:02.398835 | orchestrator | Saturday 13 September 2025 00:44:57 +0000 (0:00:00.284) 0:00:56.892 ****
2025-09-13 00:45:02.398843 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4283f495-c022-53d0-a3fe-4c36d70cad8f'}})
2025-09-13 00:45:02.398853 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'}})
2025-09-13 00:45:02.398868 | orchestrator |
2025-09-13 00:45:02.398938 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-13 00:45:02.398948 | orchestrator | Saturday 13 September 2025 00:44:57 +0000 (0:00:00.169) 0:00:57.062 ****
2025-09-13 00:45:02.398959 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})
2025-09-13 00:45:02.398969 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})
2025-09-13 00:45:02.398978 | orchestrator |
2025-09-13 00:45:02.398987 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-13 00:45:02.399012 | orchestrator | Saturday 13 September 2025 00:44:59 +0000 (0:00:01.839) 0:00:58.902 ****
2025-09-13 00:45:02.399021 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})
2025-09-13 00:45:02.399032 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})
2025-09-13 00:45:02.399041 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:02.399050 | orchestrator |
2025-09-13 00:45:02.399059 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-13 00:45:02.399068 | orchestrator | Saturday 13 September 2025 00:44:59 +0000 (0:00:00.140) 0:00:59.042 ****
2025-09-13 00:45:02.399076 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})
2025-09-13 00:45:02.399104 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})
2025-09-13 00:45:02.399114 | orchestrator |
2025-09-13 00:45:02.399123 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-13 00:45:02.399132 | orchestrator | Saturday 13 September 2025 00:45:01 +0000 (0:00:01.333) 0:01:00.376 ****
2025-09-13 00:45:02.399153 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})
2025-09-13 00:45:02.399162 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})
2025-09-13 00:45:02.399179 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:02.399188 | orchestrator |
2025-09-13 00:45:02.399197 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-13 00:45:02.399206 | orchestrator | Saturday 13 September 2025 00:45:01 +0000 (0:00:00.128) 0:01:00.505 ****
2025-09-13 00:45:02.399215 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:02.399223 | orchestrator |
2025-09-13 00:45:02.399232 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-13 00:45:02.399241 | orchestrator | Saturday 13 September 2025 00:45:01 +0000 (0:00:00.128) 0:01:00.634 ****
2025-09-13 00:45:02.399250 |
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})  2025-09-13 00:45:02.399263 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})  2025-09-13 00:45:02.399272 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:02.399281 | orchestrator | 2025-09-13 00:45:02.399290 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-13 00:45:02.399299 | orchestrator | Saturday 13 September 2025 00:45:01 +0000 (0:00:00.140) 0:01:00.774 **** 2025-09-13 00:45:02.399307 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:02.399323 | orchestrator | 2025-09-13 00:45:02.399332 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-13 00:45:02.399340 | orchestrator | Saturday 13 September 2025 00:45:01 +0000 (0:00:00.108) 0:01:00.883 **** 2025-09-13 00:45:02.399349 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})  2025-09-13 00:45:02.399358 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})  2025-09-13 00:45:02.399367 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:02.399375 | orchestrator | 2025-09-13 00:45:02.399384 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-13 00:45:02.399393 | orchestrator | Saturday 13 September 2025 00:45:01 +0000 (0:00:00.142) 0:01:01.026 **** 2025-09-13 00:45:02.399402 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:02.399410 | orchestrator | 2025-09-13 00:45:02.399419 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-09-13 00:45:02.399428 | orchestrator | Saturday 13 September 2025 00:45:01 +0000 (0:00:00.127) 0:01:01.153 **** 2025-09-13 00:45:02.399436 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})  2025-09-13 00:45:02.399445 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})  2025-09-13 00:45:02.399454 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:02.399463 | orchestrator | 2025-09-13 00:45:02.399472 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-13 00:45:02.399480 | orchestrator | Saturday 13 September 2025 00:45:01 +0000 (0:00:00.141) 0:01:01.295 **** 2025-09-13 00:45:02.399489 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:45:02.399498 | orchestrator | 2025-09-13 00:45:02.399507 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-13 00:45:02.399515 | orchestrator | Saturday 13 September 2025 00:45:02 +0000 (0:00:00.276) 0:01:01.571 **** 2025-09-13 00:45:02.399530 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})  2025-09-13 00:45:07.988327 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})  2025-09-13 00:45:07.988404 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:07.988416 | orchestrator | 2025-09-13 00:45:07.988428 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-13 00:45:07.988440 | orchestrator | Saturday 13 September 2025 
00:45:02 +0000 (0:00:00.190) 0:01:01.762 **** 2025-09-13 00:45:07.988451 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})  2025-09-13 00:45:07.988462 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})  2025-09-13 00:45:07.988473 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:07.988484 | orchestrator | 2025-09-13 00:45:07.988496 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-13 00:45:07.988506 | orchestrator | Saturday 13 September 2025 00:45:02 +0000 (0:00:00.135) 0:01:01.897 **** 2025-09-13 00:45:07.988517 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})  2025-09-13 00:45:07.988528 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})  2025-09-13 00:45:07.988539 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:07.988569 | orchestrator | 2025-09-13 00:45:07.988581 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-13 00:45:07.988591 | orchestrator | Saturday 13 September 2025 00:45:02 +0000 (0:00:00.133) 0:01:02.031 **** 2025-09-13 00:45:07.988602 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:45:07.988613 | orchestrator | 2025-09-13 00:45:07.988624 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-13 00:45:07.988634 | orchestrator | Saturday 13 September 2025 00:45:02 +0000 (0:00:00.125) 0:01:02.156 **** 2025-09-13 00:45:07.988645 | orchestrator | skipping: [testbed-node-5] 2025-09-13 
00:45:07.988656 | orchestrator |
2025-09-13 00:45:07.988667 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-13 00:45:07.988678 | orchestrator | Saturday 13 September 2025 00:45:02 +0000 (0:00:00.126) 0:01:02.282 ****
2025-09-13 00:45:07.988689 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.988700 | orchestrator |
2025-09-13 00:45:07.988710 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-13 00:45:07.988735 | orchestrator | Saturday 13 September 2025 00:45:03 +0000 (0:00:00.116) 0:01:02.399 ****
2025-09-13 00:45:07.988747 | orchestrator | ok: [testbed-node-5] => {
2025-09-13 00:45:07.988758 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-09-13 00:45:07.988769 | orchestrator | }
2025-09-13 00:45:07.988780 | orchestrator |
2025-09-13 00:45:07.988791 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-13 00:45:07.988802 | orchestrator | Saturday 13 September 2025 00:45:03 +0000 (0:00:00.134) 0:01:02.533 ****
2025-09-13 00:45:07.988813 | orchestrator | ok: [testbed-node-5] => {
2025-09-13 00:45:07.988823 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-09-13 00:45:07.988834 | orchestrator | }
2025-09-13 00:45:07.988845 | orchestrator |
2025-09-13 00:45:07.988856 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-13 00:45:07.988867 | orchestrator | Saturday 13 September 2025 00:45:03 +0000 (0:00:00.121) 0:01:02.654 ****
2025-09-13 00:45:07.988878 | orchestrator | ok: [testbed-node-5] => {
2025-09-13 00:45:07.988921 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-09-13 00:45:07.988934 | orchestrator | }
2025-09-13 00:45:07.988947 | orchestrator |
2025-09-13 00:45:07.988959 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-13 00:45:07.988972 | orchestrator | Saturday 13 September 2025 00:45:03 +0000 (0:00:00.146) 0:01:02.800 ****
2025-09-13 00:45:07.988985 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:45:07.988997 | orchestrator |
2025-09-13 00:45:07.989009 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-13 00:45:07.989022 | orchestrator | Saturday 13 September 2025 00:45:03 +0000 (0:00:00.510) 0:01:03.311 ****
2025-09-13 00:45:07.989034 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:45:07.989046 | orchestrator |
2025-09-13 00:45:07.989059 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-13 00:45:07.989071 | orchestrator | Saturday 13 September 2025 00:45:04 +0000 (0:00:00.512) 0:01:03.824 ****
2025-09-13 00:45:07.989083 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:45:07.989097 | orchestrator |
2025-09-13 00:45:07.989110 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-13 00:45:07.989123 | orchestrator | Saturday 13 September 2025 00:45:05 +0000 (0:00:00.666) 0:01:04.490 ****
2025-09-13 00:45:07.989135 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:45:07.989148 | orchestrator |
2025-09-13 00:45:07.989160 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-13 00:45:07.989172 | orchestrator | Saturday 13 September 2025 00:45:05 +0000 (0:00:00.150) 0:01:04.641 ****
2025-09-13 00:45:07.989185 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.989197 | orchestrator |
2025-09-13 00:45:07.989209 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-13 00:45:07.989222 | orchestrator | Saturday 13 September 2025 00:45:05 +0000 (0:00:00.132) 0:01:04.773 ****
2025-09-13 00:45:07.989243 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.989254 | orchestrator |
2025-09-13 00:45:07.989265 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-13 00:45:07.989275 | orchestrator | Saturday 13 September 2025 00:45:05 +0000 (0:00:00.090) 0:01:04.863 ****
2025-09-13 00:45:07.989286 | orchestrator | ok: [testbed-node-5] => {
2025-09-13 00:45:07.989297 | orchestrator |     "vgs_report": {
2025-09-13 00:45:07.989308 | orchestrator |         "vg": []
2025-09-13 00:45:07.989335 | orchestrator |     }
2025-09-13 00:45:07.989346 | orchestrator | }
2025-09-13 00:45:07.989357 | orchestrator |
2025-09-13 00:45:07.989368 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-13 00:45:07.989379 | orchestrator | Saturday 13 September 2025 00:45:05 +0000 (0:00:00.147) 0:01:05.011 ****
2025-09-13 00:45:07.989390 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.989401 | orchestrator |
2025-09-13 00:45:07.989411 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-13 00:45:07.989422 | orchestrator | Saturday 13 September 2025 00:45:05 +0000 (0:00:00.125) 0:01:05.137 ****
2025-09-13 00:45:07.989433 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.989444 | orchestrator |
2025-09-13 00:45:07.989455 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-13 00:45:07.989465 | orchestrator | Saturday 13 September 2025 00:45:05 +0000 (0:00:00.114) 0:01:05.251 ****
2025-09-13 00:45:07.989476 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.989487 | orchestrator |
2025-09-13 00:45:07.989498 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-13 00:45:07.989508 | orchestrator | Saturday 13 September 2025 00:45:05 +0000 (0:00:00.110) 0:01:05.361 ****
2025-09-13 00:45:07.989519 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.989530 | orchestrator |
2025-09-13 00:45:07.989541 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-13 00:45:07.989552 | orchestrator | Saturday 13 September 2025 00:45:06 +0000 (0:00:00.122) 0:01:05.484 ****
2025-09-13 00:45:07.989563 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.989574 | orchestrator |
2025-09-13 00:45:07.989584 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-13 00:45:07.989595 | orchestrator | Saturday 13 September 2025 00:45:06 +0000 (0:00:00.115) 0:01:05.600 ****
2025-09-13 00:45:07.989606 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.989617 | orchestrator |
2025-09-13 00:45:07.989627 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-13 00:45:07.989638 | orchestrator | Saturday 13 September 2025 00:45:06 +0000 (0:00:00.117) 0:01:05.718 ****
2025-09-13 00:45:07.989649 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.989659 | orchestrator |
2025-09-13 00:45:07.989670 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-13 00:45:07.989681 | orchestrator | Saturday 13 September 2025 00:45:06 +0000 (0:00:00.115) 0:01:05.833 ****
2025-09-13 00:45:07.989692 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.989702 | orchestrator |
2025-09-13 00:45:07.989713 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-13 00:45:07.989724 | orchestrator | Saturday 13 September 2025 00:45:06 +0000 (0:00:00.122) 0:01:05.956 ****
2025-09-13 00:45:07.989735 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.989746 | orchestrator |
2025-09-13 00:45:07.989757 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-13 00:45:07.989773 | orchestrator | Saturday 13 September 2025 00:45:06 +0000 (0:00:00.349) 0:01:06.305 ****
2025-09-13 00:45:07.989784 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.989795 | orchestrator |
2025-09-13 00:45:07.989806 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-13 00:45:07.989817 | orchestrator | Saturday 13 September 2025 00:45:07 +0000 (0:00:00.121) 0:01:06.427 ****
2025-09-13 00:45:07.989827 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.989845 | orchestrator |
2025-09-13 00:45:07.989856 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-13 00:45:07.989867 | orchestrator | Saturday 13 September 2025 00:45:07 +0000 (0:00:00.135) 0:01:06.563 ****
2025-09-13 00:45:07.989878 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.989908 | orchestrator |
2025-09-13 00:45:07.989920 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-13 00:45:07.989931 | orchestrator | Saturday 13 September 2025 00:45:07 +0000 (0:00:00.125) 0:01:06.689 ****
2025-09-13 00:45:07.989941 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.989952 | orchestrator |
2025-09-13 00:45:07.989963 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-13 00:45:07.989974 | orchestrator | Saturday 13 September 2025 00:45:07 +0000 (0:00:00.102) 0:01:06.792 ****
2025-09-13 00:45:07.989985 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.989996 | orchestrator |
2025-09-13 00:45:07.990006 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-13 00:45:07.990046 | orchestrator | Saturday 13 September 2025 00:45:07 +0000 (0:00:00.109) 0:01:06.901 ****
2025-09-13 00:45:07.990060 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})
2025-09-13 00:45:07.990071 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})
2025-09-13 00:45:07.990082 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.990093 | orchestrator |
2025-09-13 00:45:07.990133 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-13 00:45:07.990145 | orchestrator | Saturday 13 September 2025 00:45:07 +0000 (0:00:00.148) 0:01:07.050 ****
2025-09-13 00:45:07.990156 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})
2025-09-13 00:45:07.990167 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})
2025-09-13 00:45:07.990178 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:07.990188 | orchestrator |
2025-09-13 00:45:07.990199 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-13 00:45:07.990210 | orchestrator | Saturday 13 September 2025 00:45:07 +0000 (0:00:00.142) 0:01:07.192 ****
2025-09-13 00:45:07.990230 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})
2025-09-13 00:45:10.716194 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})
2025-09-13 00:45:10.716957 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:10.717005 | orchestrator |
2025-09-13 00:45:10.717023 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-13 00:45:10.717038 | orchestrator | Saturday 13 September 2025 00:45:07 +0000 (0:00:00.161) 0:01:07.354 ****
2025-09-13 00:45:10.717052 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})
2025-09-13 00:45:10.717065 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})
2025-09-13 00:45:10.717078 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:10.717090 | orchestrator |
2025-09-13 00:45:10.717101 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-13 00:45:10.717112 | orchestrator | Saturday 13 September 2025 00:45:08 +0000 (0:00:00.128) 0:01:07.482 ****
2025-09-13 00:45:10.717122 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})
2025-09-13 00:45:10.717153 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})
2025-09-13 00:45:10.717164 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:10.717175 | orchestrator |
2025-09-13 00:45:10.717185 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-13 00:45:10.717196 | orchestrator | Saturday 13 September 2025 00:45:08 +0000 (0:00:00.155) 0:01:07.638 ****
2025-09-13 00:45:10.717206 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})
2025-09-13 00:45:10.717217 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})
2025-09-13 00:45:10.717228 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:10.717239 | orchestrator |
2025-09-13 00:45:10.717250 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-13 00:45:10.717261 | orchestrator | Saturday 13 September 2025 00:45:08 +0000 (0:00:00.127) 0:01:07.765 ****
2025-09-13 00:45:10.717271 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})
2025-09-13 00:45:10.717282 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})
2025-09-13 00:45:10.717293 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:10.717304 | orchestrator |
2025-09-13 00:45:10.717314 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-13 00:45:10.717325 | orchestrator | Saturday 13 September 2025 00:45:08 +0000 (0:00:00.274) 0:01:08.040 ****
2025-09-13 00:45:10.717336 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})
2025-09-13 00:45:10.717347 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})
2025-09-13 00:45:10.717358 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:10.717368 | orchestrator |
2025-09-13 00:45:10.717379 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-13 00:45:10.717390 | orchestrator | Saturday 13 September 2025 00:45:08 +0000 (0:00:00.130) 0:01:08.170 ****
2025-09-13 00:45:10.717400 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:45:10.717412 | orchestrator |
2025-09-13 00:45:10.717422 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-13 00:45:10.717433 | orchestrator | Saturday 13 September 2025 00:45:09 +0000 (0:00:00.473) 0:01:08.644 ****
2025-09-13 00:45:10.717444 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:45:10.717454 | orchestrator |
2025-09-13 00:45:10.717465 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-13 00:45:10.717476 | orchestrator | Saturday 13 September 2025 00:45:09 +0000 (0:00:00.502) 0:01:09.146 ****
2025-09-13 00:45:10.717486 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:45:10.717497 | orchestrator |
2025-09-13 00:45:10.717508 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-13 00:45:10.717518 | orchestrator | Saturday 13 September 2025 00:45:09 +0000 (0:00:00.153) 0:01:09.300 ****
2025-09-13 00:45:10.717529 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'vg_name': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})
2025-09-13 00:45:10.717540 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'vg_name': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})
2025-09-13 00:45:10.717551 | orchestrator |
2025-09-13 00:45:10.717561 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-13 00:45:10.717579 | orchestrator | Saturday 13 September 2025 00:45:10 +0000 (0:00:00.169) 0:01:09.470 ****
2025-09-13 00:45:10.717609 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})
2025-09-13 00:45:10.717621 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})
2025-09-13 00:45:10.717632 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:10.717643 | orchestrator |
2025-09-13 00:45:10.717654 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-13 00:45:10.717665 | orchestrator | Saturday 13 September 2025 00:45:10 +0000 (0:00:00.139) 0:01:09.609 ****
2025-09-13 00:45:10.717675 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})
2025-09-13 00:45:10.717686 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})
2025-09-13 00:45:10.717697 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:10.717708 | orchestrator |
2025-09-13 00:45:10.717719 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-13 00:45:10.717730 | orchestrator | Saturday 13 September 2025 00:45:10 +0000 (0:00:00.152) 0:01:09.761 ****
2025-09-13 00:45:10.717740 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})
2025-09-13 00:45:10.717767 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})
2025-09-13 00:45:10.717778 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:10.717789 | orchestrator |
2025-09-13 00:45:10.717800 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-13 00:45:10.717811 | orchestrator | Saturday 13 September 2025 00:45:10 +0000 (0:00:00.162) 0:01:09.923 ****
2025-09-13 00:45:10.717821 | orchestrator | ok: [testbed-node-5] => {
2025-09-13 00:45:10.717832 | orchestrator |     "lvm_report": {
2025-09-13 00:45:10.717843 | orchestrator |         "lv": [
2025-09-13 00:45:10.717853 | orchestrator |             {
2025-09-13 00:45:10.717864 | orchestrator |                 "lv_name": "osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f",
2025-09-13 00:45:10.717880 | orchestrator |                 "vg_name": "ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f"
2025-09-13 00:45:10.717913 | orchestrator |             },
2025-09-13 00:45:10.717924 | orchestrator |             {
2025-09-13 00:45:10.717935 | orchestrator |                 "lv_name": "osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a",
2025-09-13 00:45:10.717946 | orchestrator |                 "vg_name": "ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a"
2025-09-13 00:45:10.717956 | orchestrator |             }
2025-09-13 00:45:10.717967 | orchestrator |         ],
2025-09-13 00:45:10.717978 | orchestrator |         "pv": [
2025-09-13 00:45:10.717988 | orchestrator |             {
2025-09-13 00:45:10.717999 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-13 00:45:10.718009 | orchestrator |                 "vg_name": "ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f"
2025-09-13 00:45:10.718154 | orchestrator |             },
2025-09-13 00:45:10.718178 | orchestrator |             {
2025-09-13 00:45:10.718197 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-13 00:45:10.718218 | orchestrator |                 "vg_name": "ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a"
2025-09-13 00:45:10.718236 | orchestrator |             }
2025-09-13 00:45:10.718255 | orchestrator |         ]
2025-09-13 00:45:10.718274 | orchestrator |     }
2025-09-13 00:45:10.718347 | orchestrator | }
2025-09-13 00:45:10.718366 | orchestrator |
2025-09-13 00:45:10.718385 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:45:10.718422 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-13 00:45:10.718441 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-13 00:45:10.718460 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-13 00:45:10.718474 | orchestrator |
2025-09-13 00:45:10.718484 | orchestrator |
2025-09-13 00:45:10.718495 | orchestrator |
2025-09-13 00:45:10.718505 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:45:10.718516 | orchestrator | Saturday 13 September 2025 00:45:10 +0000 (0:00:00.133) 0:01:10.057 ****
2025-09-13 00:45:10.718527 | orchestrator | ===============================================================================
2025-09-13 00:45:10.718538 | orchestrator | Create block VGs -------------------------------------------------------- 5.63s
2025-09-13 00:45:10.718548 | orchestrator | Create block LVs -------------------------------------------------------- 4.12s
2025-09-13 00:45:10.718559 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.84s
2025-09-13 00:45:10.718569 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.65s
2025-09-13 00:45:10.718580 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s
2025-09-13 00:45:10.718591 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.50s
2025-09-13 00:45:10.718601 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.50s
2025-09-13 00:45:10.718612 | orchestrator | Add known partitions to the list of available block devices ------------- 1.40s
2025-09-13 00:45:10.718635 | orchestrator | Add known links to the list of available block devices ------------------ 1.31s
2025-09-13 00:45:10.960436 | orchestrator | Print LVM report data --------------------------------------------------- 1.01s
2025-09-13 00:45:10.960518 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s
2025-09-13 00:45:10.960530 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.83s
2025-09-13 00:45:10.960542 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s
2025-09-13 00:45:10.960552 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s
2025-09-13 00:45:10.960563 | orchestrator | Create DB+WAL VGs ------------------------------------------------------- 0.72s
2025-09-13 00:45:10.960574 | orchestrator | Get initial list of available block devices ----------------------------- 0.71s
2025-09-13 00:45:10.960585 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.69s
2025-09-13 00:45:10.960596 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.68s
2025-09-13 00:45:10.960606 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2025-09-13 00:45:10.960617 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.66s
2025-09-13 00:45:22.990942 | orchestrator | 2025-09-13 00:45:22 | INFO  | Task 73f41c72-6985-4951-ad0b-10504e3c0d98 (facts) was prepared for execution.
2025-09-13 00:45:22.991070 | orchestrator | 2025-09-13 00:45:22 | INFO  | It takes a moment until task 73f41c72-6985-4951-ad0b-10504e3c0d98 (facts) has been started and output is visible here.
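The "Get list of Ceph LVs/PVs" and "Combine JSON" tasks above merge the per-command JSON reports into the single `lvm_report` structure that is printed at the end of the play. The exact task implementation is not shown in this log; the following is a minimal Python sketch of that merge, assuming input in the `--reportformat json` shape produced by LVM's `lvs`/`pvs` commands (values shortened here for readability):

```python
import json

# Shortened example outputs, shaped like
# `lvs -o lv_name,vg_name --reportformat json` and
# `pvs -o pv_name,vg_name --reportformat json`.
lvs_out = '{"report": [{"lv": [{"lv_name": "osd-block-4283f495", "vg_name": "ceph-4283f495"}]}]}'
pvs_out = '{"report": [{"pv": [{"pv_name": "/dev/sdb", "vg_name": "ceph-4283f495"}]}]}'


def combine_reports(lvs_json: str, pvs_json: str) -> dict:
    """Merge the lv and pv sections of two LVM JSON reports into one dict,
    analogous to the lvm_report printed by the play."""
    lv = json.loads(lvs_json)["report"][0]["lv"]
    pv = json.loads(pvs_json)["report"][0]["pv"]
    return {"lv": lv, "pv": pv}


lvm_report = combine_reports(lvs_out, pvs_out)
print(json.dumps({"lvm_report": lvm_report}, indent=4))
```

The combined dict then pairs each OSD data LV with the PV backing its VG, which is what the subsequent "Fail if ... defined in lvm_volumes is missing" checks inspect.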
2025-09-13 00:45:34.403565 | orchestrator |
2025-09-13 00:45:34.404350 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-13 00:45:34.404442 | orchestrator |
2025-09-13 00:45:34.404458 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-13 00:45:34.404472 | orchestrator | Saturday 13 September 2025 00:45:26 +0000 (0:00:00.249) 0:00:00.249 ****
2025-09-13 00:45:34.404483 | orchestrator | ok: [testbed-manager]
2025-09-13 00:45:34.404495 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:45:34.404534 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:45:34.404545 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:45:34.404556 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:45:34.404566 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:45:34.404577 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:45:34.404588 | orchestrator |
2025-09-13 00:45:34.404599 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-13 00:45:34.404610 | orchestrator | Saturday 13 September 2025 00:45:27 +0000 (0:00:00.965) 0:00:01.215 ****
2025-09-13 00:45:34.404635 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:45:34.404647 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:45:34.404660 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:45:34.404670 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:45:34.404681 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:45:34.404692 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:45:34.404702 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:34.404713 | orchestrator |
2025-09-13 00:45:34.404724 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-13 00:45:34.404735 | orchestrator |
2025-09-13 00:45:34.404745 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-13 00:45:34.404756 | orchestrator | Saturday 13 September 2025 00:45:28 +0000 (0:00:01.148) 0:00:02.363 ****
2025-09-13 00:45:34.404767 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:45:34.404777 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:45:34.404788 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:45:34.404799 | orchestrator | ok: [testbed-manager]
2025-09-13 00:45:34.404810 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:45:34.404820 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:45:34.404831 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:45:34.404841 | orchestrator |
2025-09-13 00:45:34.404852 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-13 00:45:34.404863 | orchestrator |
2025-09-13 00:45:34.404874 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-13 00:45:34.404884 | orchestrator | Saturday 13 September 2025 00:45:33 +0000 (0:00:04.805) 0:00:07.168 ****
2025-09-13 00:45:34.404895 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:45:34.404936 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:45:34.404948 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:45:34.404959 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:45:34.404969 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:45:34.404980 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:45:34.404991 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:45:34.405001 | orchestrator |
2025-09-13 00:45:34.405012 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:45:34.405024 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:45:34.405036 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:45:34.405047 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:45:34.405057 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:45:34.405068 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:45:34.405079 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:45:34.405090 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:45:34.405112 | orchestrator |
2025-09-13 00:45:34.405123 | orchestrator |
2025-09-13 00:45:34.405134 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:45:34.405145 | orchestrator | Saturday 13 September 2025 00:45:34 +0000 (0:00:00.451) 0:00:07.620 ****
2025-09-13 00:45:34.405156 | orchestrator | ===============================================================================
2025-09-13 00:45:34.405167 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.81s
2025-09-13 00:45:34.405177 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.15s
2025-09-13 00:45:34.405188 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.97s
2025-09-13 00:45:34.405199 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s
2025-09-13 00:45:46.475462 | orchestrator | 2025-09-13 00:45:46 | INFO  | Task c1e2352c-469f-4a24-ab50-acb21661e053 (frr) was prepared for execution.
2025-09-13 00:45:46.475581 | orchestrator | 2025-09-13 00:45:46 | INFO  | It takes a moment until task c1e2352c-469f-4a24-ab50-acb21661e053 (frr) has been started and output is visible here.
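The "Create custom facts directory" and "Copy fact files" tasks above use Ansible's local-facts convention: executable files installed under `/etc/ansible/facts.d` with a `.fact` suffix are run during fact gathering, and their JSON output appears under `ansible_local.<name>`. As a minimal sketch (the file name and keys here are hypothetical, not taken from the osism role), such a fact script could look like:

```shell
#!/bin/sh
# Hypothetical custom fact script; by Ansible convention it would be
# installed executable as /etc/ansible/facts.d/testbed.fact, and its
# JSON output would then be available as ansible_local.testbed.
printf '{"environment": "testbed", "managed_by": "osism"}\n'
```

After the next fact-gathering run (like the "Gathers facts about hosts" task above), playbooks can reference these values, e.g. `{{ ansible_local.testbed.environment }}`.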
2025-09-13 00:46:11.854557 | orchestrator | 2025-09-13 00:46:11.854678 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-09-13 00:46:11.854696 | orchestrator | 2025-09-13 00:46:11.854709 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-09-13 00:46:11.854721 | orchestrator | Saturday 13 September 2025 00:45:50 +0000 (0:00:00.216) 0:00:00.216 **** 2025-09-13 00:46:11.854733 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-09-13 00:46:11.854745 | orchestrator | 2025-09-13 00:46:11.854756 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-09-13 00:46:11.854767 | orchestrator | Saturday 13 September 2025 00:45:50 +0000 (0:00:00.205) 0:00:00.421 **** 2025-09-13 00:46:11.854779 | orchestrator | changed: [testbed-manager] 2025-09-13 00:46:11.854790 | orchestrator | 2025-09-13 00:46:11.854802 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-09-13 00:46:11.854813 | orchestrator | Saturday 13 September 2025 00:45:51 +0000 (0:00:01.039) 0:00:01.460 **** 2025-09-13 00:46:11.854824 | orchestrator | changed: [testbed-manager] 2025-09-13 00:46:11.854835 | orchestrator | 2025-09-13 00:46:11.854863 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-09-13 00:46:11.854875 | orchestrator | Saturday 13 September 2025 00:46:01 +0000 (0:00:09.780) 0:00:11.241 **** 2025-09-13 00:46:11.854886 | orchestrator | ok: [testbed-manager] 2025-09-13 00:46:11.854898 | orchestrator | 2025-09-13 00:46:11.854909 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-09-13 00:46:11.854920 | orchestrator | Saturday 13 September 2025 00:46:02 +0000 (0:00:01.277) 0:00:12.518 **** 2025-09-13 
00:46:11.854931 | orchestrator | changed: [testbed-manager] 2025-09-13 00:46:11.854991 | orchestrator | 2025-09-13 00:46:11.855003 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-09-13 00:46:11.855014 | orchestrator | Saturday 13 September 2025 00:46:03 +0000 (0:00:00.987) 0:00:13.506 **** 2025-09-13 00:46:11.855025 | orchestrator | ok: [testbed-manager] 2025-09-13 00:46:11.855036 | orchestrator | 2025-09-13 00:46:11.855047 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-09-13 00:46:11.855059 | orchestrator | Saturday 13 September 2025 00:46:04 +0000 (0:00:01.159) 0:00:14.666 **** 2025-09-13 00:46:11.855070 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-13 00:46:11.855081 | orchestrator | 2025-09-13 00:46:11.855095 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-09-13 00:46:11.855107 | orchestrator | Saturday 13 September 2025 00:46:05 +0000 (0:00:00.837) 0:00:15.503 **** 2025-09-13 00:46:11.855120 | orchestrator | skipping: [testbed-manager] 2025-09-13 00:46:11.855132 | orchestrator | 2025-09-13 00:46:11.855146 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-09-13 00:46:11.855184 | orchestrator | Saturday 13 September 2025 00:46:05 +0000 (0:00:00.156) 0:00:15.659 **** 2025-09-13 00:46:11.855198 | orchestrator | changed: [testbed-manager] 2025-09-13 00:46:11.855210 | orchestrator | 2025-09-13 00:46:11.855223 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-09-13 00:46:11.855235 | orchestrator | Saturday 13 September 2025 00:46:06 +0000 (0:00:00.985) 0:00:16.645 **** 2025-09-13 00:46:11.855248 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-09-13 00:46:11.855260 | orchestrator | changed: [testbed-manager] => 
(item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-09-13 00:46:11.855273 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-09-13 00:46:11.855286 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-09-13 00:46:11.855299 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-09-13 00:46:11.855312 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-09-13 00:46:11.855324 | orchestrator | 2025-09-13 00:46:11.855336 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-09-13 00:46:11.855348 | orchestrator | Saturday 13 September 2025 00:46:08 +0000 (0:00:02.183) 0:00:18.828 **** 2025-09-13 00:46:11.855361 | orchestrator | ok: [testbed-manager] 2025-09-13 00:46:11.855373 | orchestrator | 2025-09-13 00:46:11.855385 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-09-13 00:46:11.855397 | orchestrator | Saturday 13 September 2025 00:46:10 +0000 (0:00:01.386) 0:00:20.215 **** 2025-09-13 00:46:11.855409 | orchestrator | changed: [testbed-manager] 2025-09-13 00:46:11.855421 | orchestrator | 2025-09-13 00:46:11.855434 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:46:11.855446 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-13 00:46:11.855457 | orchestrator | 2025-09-13 00:46:11.855467 | orchestrator | 2025-09-13 00:46:11.855478 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 00:46:11.855489 | orchestrator | Saturday 13 September 2025 00:46:11 +0000 (0:00:01.393) 0:00:21.609 **** 2025-09-13 
00:46:11.855500 | orchestrator | ===============================================================================
2025-09-13 00:46:11.855511 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.78s
2025-09-13 00:46:11.855522 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.18s
2025-09-13 00:46:11.855532 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.39s
2025-09-13 00:46:11.855543 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.39s
2025-09-13 00:46:11.855572 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.28s
2025-09-13 00:46:11.855584 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.16s
2025-09-13 00:46:11.855594 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.04s
2025-09-13 00:46:11.855605 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.99s
2025-09-13 00:46:11.855616 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.99s
2025-09-13 00:46:11.855626 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.84s
2025-09-13 00:46:11.855637 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.21s
2025-09-13 00:46:11.855648 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.16s
2025-09-13 00:46:12.137897 | orchestrator |
2025-09-13 00:46:12.139444 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Sep 13 00:46:12 UTC 2025
2025-09-13 00:46:12.139495 | orchestrator |
2025-09-13 00:46:14.061002 | orchestrator | 2025-09-13 00:46:14 | INFO  | Collection nutshell is prepared for execution
2025-09-13 00:46:14.061105 | orchestrator | 2025-09-13 00:46:14 | INFO  | D [0] - dotfiles
2025-09-13 00:46:24.067913 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [0] - homer
2025-09-13 00:46:24.068042 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [0] - netdata
2025-09-13 00:46:24.068054 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [0] - openstackclient
2025-09-13 00:46:24.069463 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [0] - phpmyadmin
2025-09-13 00:46:24.069484 | orchestrator | 2025-09-13 00:46:24 | INFO  | A [0] - common
2025-09-13 00:46:24.073608 | orchestrator | 2025-09-13 00:46:24 | INFO  | A [1] -- loadbalancer
2025-09-13 00:46:24.074086 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [2] --- opensearch
2025-09-13 00:46:24.074412 | orchestrator | 2025-09-13 00:46:24 | INFO  | A [2] --- mariadb-ng
2025-09-13 00:46:24.074796 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [3] ---- horizon
2025-09-13 00:46:24.075305 | orchestrator | 2025-09-13 00:46:24 | INFO  | A [3] ---- keystone
2025-09-13 00:46:24.075690 | orchestrator | 2025-09-13 00:46:24 | INFO  | A [4] ----- neutron
2025-09-13 00:46:24.076192 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [5] ------ wait-for-nova
2025-09-13 00:46:24.077256 | orchestrator | 2025-09-13 00:46:24 | INFO  | A [5] ------ octavia
2025-09-13 00:46:24.077570 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [4] ----- barbican
2025-09-13 00:46:24.077908 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [4] ----- designate
2025-09-13 00:46:24.078308 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [4] ----- ironic
2025-09-13 00:46:24.078534 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [4] ----- placement
2025-09-13 00:46:24.078872 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [4] ----- magnum
2025-09-13 00:46:24.079845 | orchestrator | 2025-09-13 00:46:24 | INFO  | A [1] -- openvswitch
2025-09-13 00:46:24.080048 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [2] --- ovn
2025-09-13 00:46:24.080370 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [1] -- memcached
2025-09-13 00:46:24.080671 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [1] -- redis
2025-09-13 00:46:24.080942 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [1] -- rabbitmq-ng
2025-09-13 00:46:24.081429 | orchestrator | 2025-09-13 00:46:24 | INFO  | A [0] - kubernetes
2025-09-13 00:46:24.083500 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [1] -- kubeconfig
2025-09-13 00:46:24.083651 | orchestrator | 2025-09-13 00:46:24 | INFO  | A [1] -- copy-kubeconfig
2025-09-13 00:46:24.084026 | orchestrator | 2025-09-13 00:46:24 | INFO  | A [0] - ceph
2025-09-13 00:46:24.085854 | orchestrator | 2025-09-13 00:46:24 | INFO  | A [1] -- ceph-pools
2025-09-13 00:46:24.086080 | orchestrator | 2025-09-13 00:46:24 | INFO  | A [2] --- copy-ceph-keys
2025-09-13 00:46:24.086303 | orchestrator | 2025-09-13 00:46:24 | INFO  | A [3] ---- cephclient
2025-09-13 00:46:24.086522 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-09-13 00:46:24.086683 | orchestrator | 2025-09-13 00:46:24 | INFO  | A [4] ----- wait-for-keystone
2025-09-13 00:46:24.087031 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [5] ------ kolla-ceph-rgw
2025-09-13 00:46:24.087227 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [5] ------ glance
2025-09-13 00:46:24.087384 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [5] ------ cinder
2025-09-13 00:46:24.087603 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [5] ------ nova
2025-09-13 00:46:24.087976 | orchestrator | 2025-09-13 00:46:24 | INFO  | A [4] ----- prometheus
2025-09-13 00:46:24.088202 | orchestrator | 2025-09-13 00:46:24 | INFO  | D [5] ------ grafana
2025-09-13 00:46:24.284704 | orchestrator | 2025-09-13 00:46:24 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-09-13 00:46:24.284794 | orchestrator | 2025-09-13 00:46:24 | INFO  | Tasks are running in the background
2025-09-13 00:46:26.796591 | orchestrator | 2025-09-13 00:46:26 | INFO  | No task IDs specified, wait for
all currently running tasks
2025-09-13 00:46:28.912607 | orchestrator | 2025-09-13 00:46:28 | INFO  | Task aa66a548-396e-42cc-9bd8-8a9503e13ad4 is in state STARTED
2025-09-13 00:46:28.912857 | orchestrator | 2025-09-13 00:46:28 | INFO  | Task 97410778-927f-4e7b-afab-8e8af2a110d2 is in state STARTED
2025-09-13 00:46:28.913532 | orchestrator | 2025-09-13 00:46:28 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED
2025-09-13 00:46:28.913867 | orchestrator | 2025-09-13 00:46:28 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:46:28.914380 | orchestrator | 2025-09-13 00:46:28 | INFO  | Task 5e0856eb-b349-4b3a-8402-baaecc3bc5b0 is in state STARTED
2025-09-13 00:46:28.917530 | orchestrator | 2025-09-13 00:46:28 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED
2025-09-13 00:46:28.918211 | orchestrator | 2025-09-13 00:46:28 | INFO  | Task 1a42d891-f52c-483a-a085-c064c3ddc029 is in state STARTED
2025-09-13 00:46:28.918236 | orchestrator | 2025-09-13 00:46:28 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:46:53.976945 | orchestrator | 2025-09-13 00:46:53 | INFO  | Task aa66a548-396e-42cc-9bd8-8a9503e13ad4 is in state STARTED
2025-09-13 00:46:53.977101 | orchestrator | 2025-09-13 00:46:53 | INFO  | Task 97410778-927f-4e7b-afab-8e8af2a110d2 is in state STARTED
2025-09-13 00:46:53.977118 | orchestrator | 2025-09-13 00:46:53 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED
2025-09-13 00:46:53.977130 | orchestrator | 2025-09-13 00:46:53 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:46:53.977141 | orchestrator | 2025-09-13 00:46:53 | INFO  | Task 5e0856eb-b349-4b3a-8402-baaecc3bc5b0 is in state STARTED
2025-09-13 00:46:53.977152 | orchestrator | 2025-09-13 00:46:53 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED
2025-09-13 00:46:53.977163 | orchestrator | 2025-09-13 00:46:53 | INFO  | Task 1a42d891-f52c-483a-a085-c064c3ddc029 is in state STARTED
2025-09-13 00:46:53.977175 | orchestrator | 2025-09-13 00:46:53 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:46:57.239641 | orchestrator |
2025-09-13 00:46:57.239733 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-09-13 00:46:57.239747 | orchestrator |
2025-09-13 00:46:57.239759 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
****
2025-09-13 00:46:57.239770 | orchestrator | Saturday 13 September 2025 00:46:37 +0000 (0:00:00.906) 0:00:00.906 ****
2025-09-13 00:46:57.239782 | orchestrator | changed: [testbed-manager]
2025-09-13 00:46:57.239793 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:46:57.239804 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:46:57.239815 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:46:57.239826 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:46:57.239837 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:46:57.239848 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:46:57.239859 | orchestrator |
2025-09-13 00:46:57.239877 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-09-13 00:46:57.239888 | orchestrator | Saturday 13 September 2025 00:46:44 +0000 (0:00:06.856) 0:00:07.762 ****
2025-09-13 00:46:57.239922 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-13 00:46:57.239934 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-13 00:46:57.239944 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-13 00:46:57.239955 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-13 00:46:57.239965 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-13 00:46:57.240013 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-13 00:46:57.240035 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-13 00:46:57.240053 | orchestrator |
2025-09-13 00:46:57.240071 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
*** 2025-09-13 00:46:57.240087 | orchestrator | Saturday 13 September 2025 00:46:46 +0000 (0:00:02.599) 0:00:10.361 **** 2025-09-13 00:46:57.240102 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-13 00:46:45.022652', 'end': '2025-09-13 00:46:45.030463', 'delta': '0:00:00.007811', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-13 00:46:57.240124 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-13 00:46:44.843638', 'end': '2025-09-13 00:46:44.853742', 'delta': '0:00:00.010104', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-13 00:46:57.240136 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-13 00:46:44.877408', 'end': '2025-09-13 00:46:44.884358', 'delta': '0:00:00.006950', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-13 00:46:57.240167 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-13 00:46:44.874833', 'end': '2025-09-13 00:46:44.882010', 'delta': '0:00:00.007177', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-13 00:46:57.240201 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-13 00:46:45.483903', 'end': '2025-09-13 00:46:45.494377', 'delta': '0:00:00.010474', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-13 00:46:57.240216 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-13 00:46:45.818819', 'end': '2025-09-13 00:46:45.827074', 'delta': '0:00:00.008255', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-13 00:46:57.240229 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-13 00:46:45.775669', 'end': '2025-09-13 00:46:45.786957', 'delta': '0:00:00.011288', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-13 00:46:57.240242 | orchestrator |
2025-09-13 00:46:57.240255 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-09-13 00:46:57.240267 | orchestrator | Saturday 13 September 2025 00:46:49 +0000 (0:00:02.924) 0:00:13.285 ****
2025-09-13 00:46:57.240280 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-13 00:46:57.240292 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-13 00:46:57.240304 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-13 00:46:57.240317 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-13 00:46:57.240329 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-13 00:46:57.240340 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-13 00:46:57.240352 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-13 00:46:57.240364 | orchestrator |
2025-09-13 00:46:57.240376 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.]
****************** 2025-09-13 00:46:57.240388 | orchestrator | Saturday 13 September 2025 00:46:52 +0000 (0:00:03.271) 0:00:16.556 **** 2025-09-13 00:46:57.240401 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-09-13 00:46:57.240413 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-09-13 00:46:57.240425 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-09-13 00:46:57.240437 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-09-13 00:46:57.240456 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-09-13 00:46:57.240469 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-09-13 00:46:57.240482 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-09-13 00:46:57.240494 | orchestrator | 2025-09-13 00:46:57.240507 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:46:57.240527 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:46:57.240539 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:46:57.240550 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:46:57.240566 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:46:57.240577 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:46:57.240588 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:46:57.240599 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:46:57.240609 | orchestrator | 2025-09-13 00:46:57.240620 | orchestrator | 2025-09-13 00:46:57.240631 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-09-13 00:46:57.240642 | orchestrator | Saturday 13 September 2025 00:46:56 +0000 (0:00:03.599) 0:00:20.156 **** 2025-09-13 00:46:57.240653 | orchestrator | =============================================================================== 2025-09-13 00:46:57.240664 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 6.86s 2025-09-13 00:46:57.240674 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.60s 2025-09-13 00:46:57.240685 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 3.27s 2025-09-13 00:46:57.240696 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.92s 2025-09-13 00:46:57.240706 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.60s 2025-09-13 00:46:57.240717 | orchestrator | 2025-09-13 00:46:57 | INFO  | Task aa66a548-396e-42cc-9bd8-8a9503e13ad4 is in state SUCCESS 2025-09-13 00:46:57.240728 | orchestrator | 2025-09-13 00:46:57 | INFO  | Task 97410778-927f-4e7b-afab-8e8af2a110d2 is in state STARTED 2025-09-13 00:46:57.240739 | orchestrator | 2025-09-13 00:46:57 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED 2025-09-13 00:46:57.240749 | orchestrator | 2025-09-13 00:46:57 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:46:57.240760 | orchestrator | 2025-09-13 00:46:57 | INFO  | Task 5e0856eb-b349-4b3a-8402-baaecc3bc5b0 is in state STARTED 2025-09-13 00:46:57.240771 | orchestrator | 2025-09-13 00:46:57 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED 2025-09-13 00:46:57.240781 | orchestrator | 2025-09-13 00:46:57 | INFO  | Task 1a42d891-f52c-483a-a085-c064c3ddc029 is in state STARTED 2025-09-13 00:46:57.240792 | orchestrator | 2025-09-13 00:46:57 | INFO  | Wait 1 second(s) 
until the next check
2025-09-13 00:47:00.270856 | orchestrator | 2025-09-13 00:47:00 | INFO  | Task 97410778-927f-4e7b-afab-8e8af2a110d2 is in state STARTED
2025-09-13 00:47:00.275605 | orchestrator | 2025-09-13 00:47:00 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED
2025-09-13 00:47:00.277188 | orchestrator | 2025-09-13 00:47:00 | INFO  | Task 75e68ad3-c3c1-47a7-b4d6-7c5aae92c729 is in state STARTED
2025-09-13 00:47:00.281672 | orchestrator | 2025-09-13 00:47:00 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:47:00.287887 | orchestrator | 2025-09-13 00:47:00 | INFO  | Task 5e0856eb-b349-4b3a-8402-baaecc3bc5b0 is in state STARTED
2025-09-13 00:47:00.287938 | orchestrator | 2025-09-13 00:47:00 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED
2025-09-13 00:47:00.287950 | orchestrator | 2025-09-13 00:47:00 | INFO  | Task 1a42d891-f52c-483a-a085-c064c3ddc029 is in state STARTED
2025-09-13 00:47:00.287962 | orchestrator | 2025-09-13 00:47:00 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:47:21.881925 | orchestrator | 2025-09-13 00:47:21 | INFO  | Task 97410778-927f-4e7b-afab-8e8af2a110d2 is in state STARTED
2025-09-13 00:47:21.882139 | orchestrator | 2025-09-13 00:47:21 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED
2025-09-13 00:47:21.882190 | orchestrator | 2025-09-13 00:47:21 | INFO  | Task 75e68ad3-c3c1-47a7-b4d6-7c5aae92c729 is in state STARTED
2025-09-13 00:47:21.882202 | orchestrator | 2025-09-13 00:47:21 | INFO  | Task
62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:47:21.882213 | orchestrator | 2025-09-13 00:47:21 | INFO  | Task 5e0856eb-b349-4b3a-8402-baaecc3bc5b0 is in state STARTED 2025-09-13 00:47:21.882224 | orchestrator | 2025-09-13 00:47:21 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED 2025-09-13 00:47:21.882251 | orchestrator | 2025-09-13 00:47:21 | INFO  | Task 1a42d891-f52c-483a-a085-c064c3ddc029 is in state STARTED 2025-09-13 00:47:21.882263 | orchestrator | 2025-09-13 00:47:21 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:47:24.973882 | orchestrator | 2025-09-13 00:47:24 | INFO  | Task 97410778-927f-4e7b-afab-8e8af2a110d2 is in state STARTED 2025-09-13 00:47:24.973983 | orchestrator | 2025-09-13 00:47:24 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED 2025-09-13 00:47:24.975376 | orchestrator | 2025-09-13 00:47:24 | INFO  | Task 75e68ad3-c3c1-47a7-b4d6-7c5aae92c729 is in state STARTED 2025-09-13 00:47:24.975404 | orchestrator | 2025-09-13 00:47:24 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:47:24.976670 | orchestrator | 2025-09-13 00:47:24 | INFO  | Task 5e0856eb-b349-4b3a-8402-baaecc3bc5b0 is in state STARTED 2025-09-13 00:47:24.977291 | orchestrator | 2025-09-13 00:47:24 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED 2025-09-13 00:47:24.977470 | orchestrator | 2025-09-13 00:47:24 | INFO  | Task 1a42d891-f52c-483a-a085-c064c3ddc029 is in state STARTED 2025-09-13 00:47:24.977491 | orchestrator | 2025-09-13 00:47:24 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:47:28.047429 | orchestrator | 2025-09-13 00:47:28 | INFO  | Task 97410778-927f-4e7b-afab-8e8af2a110d2 is in state STARTED 2025-09-13 00:47:28.048611 | orchestrator | 2025-09-13 00:47:28 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED 2025-09-13 00:47:28.048771 | orchestrator | 2025-09-13 00:47:28 | INFO  | Task 
75e68ad3-c3c1-47a7-b4d6-7c5aae92c729 is in state STARTED 2025-09-13 00:47:28.049847 | orchestrator | 2025-09-13 00:47:28 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:47:28.049877 | orchestrator | 2025-09-13 00:47:28 | INFO  | Task 5e0856eb-b349-4b3a-8402-baaecc3bc5b0 is in state SUCCESS 2025-09-13 00:47:28.059460 | orchestrator | 2025-09-13 00:47:28 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED 2025-09-13 00:47:28.082633 | orchestrator | 2025-09-13 00:47:28 | INFO  | Task 1a42d891-f52c-483a-a085-c064c3ddc029 is in state STARTED 2025-09-13 00:47:28.082672 | orchestrator | 2025-09-13 00:47:28 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:47:31.147837 | orchestrator | 2025-09-13 00:47:31 | INFO  | Task 97410778-927f-4e7b-afab-8e8af2a110d2 is in state STARTED 2025-09-13 00:47:31.148109 | orchestrator | 2025-09-13 00:47:31 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED 2025-09-13 00:47:31.148960 | orchestrator | 2025-09-13 00:47:31 | INFO  | Task 75e68ad3-c3c1-47a7-b4d6-7c5aae92c729 is in state STARTED 2025-09-13 00:47:31.149718 | orchestrator | 2025-09-13 00:47:31 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:47:31.153655 | orchestrator | 2025-09-13 00:47:31 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED 2025-09-13 00:47:31.154970 | orchestrator | 2025-09-13 00:47:31 | INFO  | Task 1a42d891-f52c-483a-a085-c064c3ddc029 is in state STARTED 2025-09-13 00:47:31.155043 | orchestrator | 2025-09-13 00:47:31 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:47:34.250403 | orchestrator | 2025-09-13 00:47:34 | INFO  | Task 97410778-927f-4e7b-afab-8e8af2a110d2 is in state SUCCESS 2025-09-13 00:47:34.255901 | orchestrator | 2025-09-13 00:47:34 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED 2025-09-13 00:47:34.258777 | orchestrator | 2025-09-13 00:47:34 | INFO  | Task 
75e68ad3-c3c1-47a7-b4d6-7c5aae92c729 is in state STARTED 2025-09-13 00:47:34.259374 | orchestrator | 2025-09-13 00:47:34 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:47:34.260475 | orchestrator | 2025-09-13 00:47:34 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED 2025-09-13 00:47:34.260910 | orchestrator | 2025-09-13 00:47:34 | INFO  | Task 1a42d891-f52c-483a-a085-c064c3ddc029 is in state STARTED 2025-09-13 00:47:34.260934 | orchestrator | 2025-09-13 00:47:34 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:47:37.293747 | orchestrator | 2025-09-13 00:47:37 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED 2025-09-13 00:47:37.293845 | orchestrator | 2025-09-13 00:47:37 | INFO  | Task 75e68ad3-c3c1-47a7-b4d6-7c5aae92c729 is in state STARTED 2025-09-13 00:47:37.293859 | orchestrator | 2025-09-13 00:47:37 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:47:37.304397 | orchestrator | 2025-09-13 00:47:37 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED 2025-09-13 00:47:37.304429 | orchestrator | 2025-09-13 00:47:37 | INFO  | Task 1a42d891-f52c-483a-a085-c064c3ddc029 is in state STARTED 2025-09-13 00:47:37.304443 | orchestrator | 2025-09-13 00:47:37 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:47:40.364181 | orchestrator | 2025-09-13 00:47:40 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED 2025-09-13 00:47:40.366002 | orchestrator | 2025-09-13 00:47:40 | INFO  | Task 75e68ad3-c3c1-47a7-b4d6-7c5aae92c729 is in state STARTED 2025-09-13 00:47:40.369300 | orchestrator | 2025-09-13 00:47:40 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:47:40.379110 | orchestrator | 2025-09-13 00:47:40 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED 2025-09-13 00:47:40.383310 | orchestrator | 2025-09-13 00:47:40 | INFO  | Task 
1a42d891-f52c-483a-a085-c064c3ddc029 is in state STARTED 2025-09-13 00:47:40.384109 | orchestrator | 2025-09-13 00:47:40 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:47:43.440380 | orchestrator | 2025-09-13 00:47:43 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED 2025-09-13 00:47:43.440480 | orchestrator | 2025-09-13 00:47:43 | INFO  | Task 75e68ad3-c3c1-47a7-b4d6-7c5aae92c729 is in state STARTED 2025-09-13 00:47:43.440494 | orchestrator | 2025-09-13 00:47:43 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:47:43.440505 | orchestrator | 2025-09-13 00:47:43 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED 2025-09-13 00:47:43.440515 | orchestrator | 2025-09-13 00:47:43 | INFO  | Task 1a42d891-f52c-483a-a085-c064c3ddc029 is in state STARTED 2025-09-13 00:47:43.440525 | orchestrator | 2025-09-13 00:47:43 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:47:46.478206 | orchestrator | 2025-09-13 00:47:46 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED 2025-09-13 00:47:46.480257 | orchestrator | 2025-09-13 00:47:46 | INFO  | Task 75e68ad3-c3c1-47a7-b4d6-7c5aae92c729 is in state STARTED 2025-09-13 00:47:46.482290 | orchestrator | 2025-09-13 00:47:46 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:47:46.483495 | orchestrator | 2025-09-13 00:47:46 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED 2025-09-13 00:47:46.484787 | orchestrator | 2025-09-13 00:47:46 | INFO  | Task 1a42d891-f52c-483a-a085-c064c3ddc029 is in state STARTED 2025-09-13 00:47:46.484812 | orchestrator | 2025-09-13 00:47:46 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:47:49.535922 | orchestrator | 2025-09-13 00:47:49 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED 2025-09-13 00:47:49.536697 | orchestrator | 2025-09-13 00:47:49 | INFO  | Task 
75e68ad3-c3c1-47a7-b4d6-7c5aae92c729 is in state STARTED 2025-09-13 00:47:49.538506 | orchestrator | 2025-09-13 00:47:49 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:47:49.540761 | orchestrator | 2025-09-13 00:47:49 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED 2025-09-13 00:47:49.544975 | orchestrator | 2025-09-13 00:47:49 | INFO  | Task 1a42d891-f52c-483a-a085-c064c3ddc029 is in state STARTED 2025-09-13 00:47:49.544999 | orchestrator | 2025-09-13 00:47:49 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:47:52.578867 | orchestrator | 2025-09-13 00:47:52 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED 2025-09-13 00:47:52.581294 | orchestrator | 2025-09-13 00:47:52 | INFO  | Task 75e68ad3-c3c1-47a7-b4d6-7c5aae92c729 is in state STARTED 2025-09-13 00:47:52.582550 | orchestrator | 2025-09-13 00:47:52 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:47:52.586425 | orchestrator | 2025-09-13 00:47:52 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED 2025-09-13 00:47:52.589287 | orchestrator | 2025-09-13 00:47:52 | INFO  | Task 1a42d891-f52c-483a-a085-c064c3ddc029 is in state SUCCESS 2025-09-13 00:47:52.589310 | orchestrator | 2025-09-13 00:47:52 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:47:52.591154 | orchestrator | 2025-09-13 00:47:52.591239 | orchestrator | 2025-09-13 00:47:52.591256 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-13 00:47:52.591268 | orchestrator | 2025-09-13 00:47:52.591280 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-13 00:47:52.591292 | orchestrator | Saturday 13 September 2025 00:46:39 +0000 (0:00:00.853) 0:00:00.853 **** 2025-09-13 00:47:52.591303 | orchestrator | ok: [testbed-manager] => { 2025-09-13 00:47:52.591325 | orchestrator |  
"msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-09-13 00:47:52.591338 | orchestrator | } 2025-09-13 00:47:52.591350 | orchestrator | 2025-09-13 00:47:52.591362 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-09-13 00:47:52.591381 | orchestrator | Saturday 13 September 2025 00:46:40 +0000 (0:00:00.871) 0:00:01.724 **** 2025-09-13 00:47:52.591434 | orchestrator | ok: [testbed-manager] 2025-09-13 00:47:52.591455 | orchestrator | 2025-09-13 00:47:52.591466 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-09-13 00:47:52.591477 | orchestrator | Saturday 13 September 2025 00:46:42 +0000 (0:00:02.049) 0:00:03.774 **** 2025-09-13 00:47:52.591488 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-09-13 00:47:52.591499 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-09-13 00:47:52.591510 | orchestrator | 2025-09-13 00:47:52.591521 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-09-13 00:47:52.591532 | orchestrator | Saturday 13 September 2025 00:46:44 +0000 (0:00:02.374) 0:00:06.149 **** 2025-09-13 00:47:52.591543 | orchestrator | changed: [testbed-manager] 2025-09-13 00:47:52.591554 | orchestrator | 2025-09-13 00:47:52.591584 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-09-13 00:47:52.591596 | orchestrator | Saturday 13 September 2025 00:46:48 +0000 (0:00:04.228) 0:00:10.377 **** 2025-09-13 00:47:52.591606 | orchestrator | changed: [testbed-manager] 2025-09-13 00:47:52.591617 | orchestrator | 2025-09-13 00:47:52.591628 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-09-13 00:47:52.591639 | orchestrator | Saturday 13 September 2025 00:46:52 +0000 (0:00:03.443) 0:00:13.821 **** 
2025-09-13 00:47:52.591650 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2025-09-13 00:47:52.591662 | orchestrator | ok: [testbed-manager] 2025-09-13 00:47:52.591675 | orchestrator | 2025-09-13 00:47:52.591687 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-09-13 00:47:52.591699 | orchestrator | Saturday 13 September 2025 00:47:23 +0000 (0:00:31.244) 0:00:45.066 **** 2025-09-13 00:47:52.591712 | orchestrator | changed: [testbed-manager] 2025-09-13 00:47:52.591724 | orchestrator | 2025-09-13 00:47:52.591736 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:47:52.591750 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:47:52.591763 | orchestrator | 2025-09-13 00:47:52.591776 | orchestrator | 2025-09-13 00:47:52.591788 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 00:47:52.591800 | orchestrator | Saturday 13 September 2025 00:47:26 +0000 (0:00:03.354) 0:00:48.420 **** 2025-09-13 00:47:52.591812 | orchestrator | =============================================================================== 2025-09-13 00:47:52.591824 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 31.24s 2025-09-13 00:47:52.591837 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.23s 2025-09-13 00:47:52.591849 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 3.44s 2025-09-13 00:47:52.591861 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.35s 2025-09-13 00:47:52.591874 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.38s 2025-09-13 00:47:52.591886 | orchestrator | osism.services.homer : Create 
traefik external network ------------------ 2.05s 2025-09-13 00:47:52.591897 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.87s 2025-09-13 00:47:52.591909 | orchestrator | 2025-09-13 00:47:52.591922 | orchestrator | 2025-09-13 00:47:52.591933 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-09-13 00:47:52.591946 | orchestrator | 2025-09-13 00:47:52.591958 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-09-13 00:47:52.591969 | orchestrator | Saturday 13 September 2025 00:46:38 +0000 (0:00:01.631) 0:00:01.631 **** 2025-09-13 00:47:52.591982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-09-13 00:47:52.591995 | orchestrator | 2025-09-13 00:47:52.592008 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-09-13 00:47:52.592039 | orchestrator | Saturday 13 September 2025 00:46:40 +0000 (0:00:01.406) 0:00:03.038 **** 2025-09-13 00:47:52.592051 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-09-13 00:47:52.592062 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-09-13 00:47:52.592073 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-09-13 00:47:52.592084 | orchestrator | 2025-09-13 00:47:52.592094 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-09-13 00:47:52.592105 | orchestrator | Saturday 13 September 2025 00:46:42 +0000 (0:00:02.534) 0:00:05.572 **** 2025-09-13 00:47:52.592116 | orchestrator | changed: [testbed-manager] 2025-09-13 00:47:52.592127 | orchestrator | 2025-09-13 00:47:52.592137 | orchestrator | TASK [osism.services.openstackclient : Manage 
openstackclient service] ********* 2025-09-13 00:47:52.592156 | orchestrator | Saturday 13 September 2025 00:46:45 +0000 (0:00:02.727) 0:00:08.300 **** 2025-09-13 00:47:52.592183 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-09-13 00:47:52.592195 | orchestrator | ok: [testbed-manager] 2025-09-13 00:47:52.592205 | orchestrator | 2025-09-13 00:47:52.592216 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-09-13 00:47:52.592227 | orchestrator | Saturday 13 September 2025 00:47:23 +0000 (0:00:37.967) 0:00:46.267 **** 2025-09-13 00:47:52.592238 | orchestrator | changed: [testbed-manager] 2025-09-13 00:47:52.592248 | orchestrator | 2025-09-13 00:47:52.592265 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-09-13 00:47:52.592276 | orchestrator | Saturday 13 September 2025 00:47:25 +0000 (0:00:01.643) 0:00:47.911 **** 2025-09-13 00:47:52.592287 | orchestrator | ok: [testbed-manager] 2025-09-13 00:47:52.592297 | orchestrator | 2025-09-13 00:47:52.592308 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-09-13 00:47:52.592319 | orchestrator | Saturday 13 September 2025 00:47:25 +0000 (0:00:00.763) 0:00:48.675 **** 2025-09-13 00:47:52.592329 | orchestrator | changed: [testbed-manager] 2025-09-13 00:47:52.592340 | orchestrator | 2025-09-13 00:47:52.592351 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-09-13 00:47:52.592362 | orchestrator | Saturday 13 September 2025 00:47:28 +0000 (0:00:02.980) 0:00:51.656 **** 2025-09-13 00:47:52.592373 | orchestrator | changed: [testbed-manager] 2025-09-13 00:47:52.592383 | orchestrator | 2025-09-13 00:47:52.592394 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-09-13 00:47:52.592405 | orchestrator 
| Saturday 13 September 2025 00:47:29 +0000 (0:00:01.039) 0:00:52.695 **** 2025-09-13 00:47:52.592416 | orchestrator | changed: [testbed-manager] 2025-09-13 00:47:52.592426 | orchestrator | 2025-09-13 00:47:52.592437 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-09-13 00:47:52.592448 | orchestrator | Saturday 13 September 2025 00:47:31 +0000 (0:00:01.205) 0:00:53.900 **** 2025-09-13 00:47:52.592459 | orchestrator | ok: [testbed-manager] 2025-09-13 00:47:52.592469 | orchestrator | 2025-09-13 00:47:52.592480 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:47:52.592491 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:47:52.592502 | orchestrator | 2025-09-13 00:47:52.592513 | orchestrator | 2025-09-13 00:47:52.592523 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 00:47:52.592534 | orchestrator | Saturday 13 September 2025 00:47:31 +0000 (0:00:00.678) 0:00:54.579 **** 2025-09-13 00:47:52.592545 | orchestrator | =============================================================================== 2025-09-13 00:47:52.592556 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.97s 2025-09-13 00:47:52.592567 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.98s 2025-09-13 00:47:52.592577 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.73s 2025-09-13 00:47:52.592588 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.53s 2025-09-13 00:47:52.592599 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.64s 2025-09-13 00:47:52.592610 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.41s 
2025-09-13 00:47:52.592620 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.21s 2025-09-13 00:47:52.592631 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.04s 2025-09-13 00:47:52.592641 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.76s 2025-09-13 00:47:52.592652 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.68s 2025-09-13 00:47:52.592663 | orchestrator | 2025-09-13 00:47:52.592680 | orchestrator | 2025-09-13 00:47:52.592690 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-13 00:47:52.592701 | orchestrator | 2025-09-13 00:47:52.592712 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-13 00:47:52.592723 | orchestrator | Saturday 13 September 2025 00:46:35 +0000 (0:00:00.580) 0:00:00.580 **** 2025-09-13 00:47:52.592734 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-09-13 00:47:52.592744 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-09-13 00:47:52.592755 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-09-13 00:47:52.592766 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-09-13 00:47:52.592777 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-09-13 00:47:52.592787 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-09-13 00:47:52.592798 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-09-13 00:47:52.592809 | orchestrator | 2025-09-13 00:47:52.592819 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-09-13 00:47:52.592830 | orchestrator | 2025-09-13 00:47:52.592841 | orchestrator | TASK [osism.services.netdata : Include 
distribution specific install tasks] **** 2025-09-13 00:47:52.592851 | orchestrator | Saturday 13 September 2025 00:46:39 +0000 (0:00:03.438) 0:00:04.018 **** 2025-09-13 00:47:52.592877 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-13 00:47:52.592896 | orchestrator | 2025-09-13 00:47:52.592907 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-09-13 00:47:52.592918 | orchestrator | Saturday 13 September 2025 00:46:42 +0000 (0:00:02.968) 0:00:06.986 **** 2025-09-13 00:47:52.592929 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:47:52.592940 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:47:52.592951 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:47:52.592962 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:47:52.592973 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:47:52.592990 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:47:52.593001 | orchestrator | ok: [testbed-manager] 2025-09-13 00:47:52.593011 | orchestrator | 2025-09-13 00:47:52.593037 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-09-13 00:47:52.593048 | orchestrator | Saturday 13 September 2025 00:46:44 +0000 (0:00:02.627) 0:00:09.614 **** 2025-09-13 00:47:52.593059 | orchestrator | ok: [testbed-manager] 2025-09-13 00:47:52.593070 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:47:52.593086 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:47:52.593097 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:47:52.593107 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:47:52.593118 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:47:52.593128 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:47:52.593139 | orchestrator | 2025-09-13 00:47:52.593150 | 
orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-09-13 00:47:52.593161 | orchestrator | Saturday 13 September 2025 00:46:50 +0000 (0:00:05.360) 0:00:14.974 **** 2025-09-13 00:47:52.593171 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:47:52.593182 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:47:52.593193 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:47:52.593203 | orchestrator | changed: [testbed-manager] 2025-09-13 00:47:52.593214 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:47:52.593225 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:47:52.593236 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:47:52.593246 | orchestrator | 2025-09-13 00:47:52.593257 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-09-13 00:47:52.593268 | orchestrator | Saturday 13 September 2025 00:46:53 +0000 (0:00:03.593) 0:00:18.568 **** 2025-09-13 00:47:52.593279 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:47:52.593296 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:47:52.593307 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:47:52.593318 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:47:52.593328 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:47:52.593339 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:47:52.593350 | orchestrator | changed: [testbed-manager] 2025-09-13 00:47:52.593361 | orchestrator | 2025-09-13 00:47:52.593371 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-09-13 00:47:52.593382 | orchestrator | Saturday 13 September 2025 00:47:05 +0000 (0:00:11.770) 0:00:30.338 **** 2025-09-13 00:47:52.593393 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:47:52.593404 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:47:52.593414 | orchestrator | changed: [testbed-node-1] 2025-09-13 
00:47:52.593425 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:47:52.593436 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:47:52.593446 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:47:52.593457 | orchestrator | changed: [testbed-manager] 2025-09-13 00:47:52.593468 | orchestrator | 2025-09-13 00:47:52.593478 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-09-13 00:47:52.593489 | orchestrator | Saturday 13 September 2025 00:47:31 +0000 (0:00:25.765) 0:00:56.103 **** 2025-09-13 00:47:52.593501 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-13 00:47:52.593514 | orchestrator | 2025-09-13 00:47:52.593525 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-09-13 00:47:52.593536 | orchestrator | Saturday 13 September 2025 00:47:32 +0000 (0:00:01.603) 0:00:57.707 **** 2025-09-13 00:47:52.593547 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-09-13 00:47:52.593558 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-09-13 00:47:52.593569 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-09-13 00:47:52.593579 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-09-13 00:47:52.593590 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-09-13 00:47:52.593601 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-09-13 00:47:52.593612 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-09-13 00:47:52.593622 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-09-13 00:47:52.593633 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-09-13 00:47:52.593644 | orchestrator | changed: 
[testbed-node-1] => (item=stream.conf) 2025-09-13 00:47:52.593655 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-09-13 00:47:52.593666 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-09-13 00:47:52.593676 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-09-13 00:47:52.593687 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-09-13 00:47:52.593698 | orchestrator | 2025-09-13 00:47:52.593709 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-09-13 00:47:52.593720 | orchestrator | Saturday 13 September 2025 00:47:36 +0000 (0:00:03.895) 0:01:01.603 **** 2025-09-13 00:47:52.593731 | orchestrator | ok: [testbed-manager] 2025-09-13 00:47:52.593742 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:47:52.593752 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:47:52.593763 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:47:52.593774 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:47:52.593785 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:47:52.593795 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:47:52.593806 | orchestrator | 2025-09-13 00:47:52.593817 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-09-13 00:47:52.593828 | orchestrator | Saturday 13 September 2025 00:47:37 +0000 (0:00:01.006) 0:01:02.609 **** 2025-09-13 00:47:52.593844 | orchestrator | changed: [testbed-manager] 2025-09-13 00:47:52.593855 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:47:52.593866 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:47:52.593876 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:47:52.593887 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:47:52.593898 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:47:52.593908 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:47:52.593919 | orchestrator | 2025-09-13 
00:47:52.593930 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-09-13 00:47:52.593947 | orchestrator | Saturday 13 September 2025 00:47:39 +0000 (0:00:01.724) 0:01:04.333 **** 2025-09-13 00:47:52.593958 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:47:52.593968 | orchestrator | ok: [testbed-manager] 2025-09-13 00:47:52.593979 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:47:52.593989 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:47:52.594000 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:47:52.594011 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:47:52.594118 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:47:52.594130 | orchestrator | 2025-09-13 00:47:52.594142 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-09-13 00:47:52.594153 | orchestrator | Saturday 13 September 2025 00:47:41 +0000 (0:00:01.991) 0:01:06.325 **** 2025-09-13 00:47:52.594163 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:47:52.594174 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:47:52.594185 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:47:52.594196 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:47:52.594206 | orchestrator | ok: [testbed-manager] 2025-09-13 00:47:52.594216 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:47:52.594227 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:47:52.594238 | orchestrator | 2025-09-13 00:47:52.594249 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-09-13 00:47:52.594260 | orchestrator | Saturday 13 September 2025 00:47:44 +0000 (0:00:02.493) 0:01:08.819 **** 2025-09-13 00:47:52.594270 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-13 00:47:52.594283 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-13 00:47:52.594294 | orchestrator | 2025-09-13 00:47:52.594305 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-13 00:47:52.594316 | orchestrator | Saturday 13 September 2025 00:47:46 +0000 (0:00:01.961) 0:01:10.780 **** 2025-09-13 00:47:52.594327 | orchestrator | changed: [testbed-manager] 2025-09-13 00:47:52.594337 | orchestrator | 2025-09-13 00:47:52.594348 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-13 00:47:52.594359 | orchestrator | Saturday 13 September 2025 00:47:47 +0000 (0:00:01.790) 0:01:12.571 **** 2025-09-13 00:47:52.594370 | orchestrator | changed: [testbed-manager] 2025-09-13 00:47:52.594381 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:47:52.594391 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:47:52.594402 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:47:52.594412 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:47:52.594423 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:47:52.594433 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:47:52.594444 | orchestrator | 2025-09-13 00:47:52.594455 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:47:52.594503 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:47:52.594516 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:47:52.594527 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:47:52.594546 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-09-13 00:47:52.594557 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:47:52.594568 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:47:52.594579 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:47:52.594589 | orchestrator | 2025-09-13 00:47:52.594600 | orchestrator | 2025-09-13 00:47:52.594611 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 00:47:52.594621 | orchestrator | Saturday 13 September 2025 00:47:50 +0000 (0:00:02.774) 0:01:15.345 **** 2025-09-13 00:47:52.594632 | orchestrator | =============================================================================== 2025-09-13 00:47:52.594643 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 25.77s 2025-09-13 00:47:52.594654 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.77s 2025-09-13 00:47:52.594664 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 5.36s 2025-09-13 00:47:52.594675 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.90s 2025-09-13 00:47:52.594685 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.59s 2025-09-13 00:47:52.594696 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.44s 2025-09-13 00:47:52.594706 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.97s 2025-09-13 00:47:52.594717 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.77s 2025-09-13 00:47:52.594728 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.63s 
2025-09-13 00:47:52.594738 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.49s 2025-09-13 00:47:52.594749 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.99s 2025-09-13 00:47:52.594767 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.96s 2025-09-13 00:47:52.594778 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.79s 2025-09-13 00:47:52.594788 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.72s 2025-09-13 00:47:52.594799 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.60s 2025-09-13 00:47:52.594814 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.01s 2025-09-13 00:47:55.624997 | orchestrator | 2025-09-13 00:47:55 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED 2025-09-13 00:47:55.626188 | orchestrator | 2025-09-13 00:47:55 | INFO  | Task 75e68ad3-c3c1-47a7-b4d6-7c5aae92c729 is in state STARTED 2025-09-13 00:47:55.628331 | orchestrator | 2025-09-13 00:47:55 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:47:55.628359 | orchestrator | 2025-09-13 00:47:55 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED 2025-09-13 00:47:55.628684 | orchestrator | 2025-09-13 00:47:55 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:47:58.669243 | orchestrator | 2025-09-13 00:47:58 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED 2025-09-13 00:47:58.669966 | orchestrator | 2025-09-13 00:47:58 | INFO  | Task 75e68ad3-c3c1-47a7-b4d6-7c5aae92c729 is in state SUCCESS 2025-09-13 00:47:58.672374 | orchestrator | 2025-09-13 00:47:58 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:47:58.674080 | orchestrator | 
2025-09-13 00:47:58 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED 2025-09-13 00:47:58.674109 | orchestrator | 2025-09-13 00:47:58 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:49:08.735363 | orchestrator | 2025-09-13 00:49:08 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED 2025-09-13 00:49:08.736005 | orchestrator | 2025-09-13 00:49:08 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:49:08.736994 | orchestrator | 
2025-09-13 00:49:08 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state STARTED 2025-09-13 00:49:08.737017 | orchestrator | 2025-09-13 00:49:08 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:49:11.777507 | orchestrator | 2025-09-13 00:49:11 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED 2025-09-13 00:49:11.777599 | orchestrator | 2025-09-13 00:49:11 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED 2025-09-13 00:49:11.777613 | orchestrator | 2025-09-13 00:49:11 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:49:11.785665 | orchestrator | 2025-09-13 00:49:11.785711 | orchestrator | 2025-09-13 00:49:11.785723 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-09-13 00:49:11.785735 | orchestrator | 2025-09-13 00:49:11.785746 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-09-13 00:49:11.785758 | orchestrator | Saturday 13 September 2025 00:47:00 +0000 (0:00:00.310) 0:00:00.310 **** 2025-09-13 00:49:11.785776 | orchestrator | ok: [testbed-manager] 2025-09-13 00:49:11.785788 | orchestrator | 2025-09-13 00:49:11.785800 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-09-13 00:49:11.785811 | orchestrator | Saturday 13 September 2025 00:47:01 +0000 (0:00:00.934) 0:00:01.245 **** 2025-09-13 00:49:11.785822 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-09-13 00:49:11.785833 | orchestrator | 2025-09-13 00:49:11.785844 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-09-13 00:49:11.785855 | orchestrator | Saturday 13 September 2025 00:47:02 +0000 (0:00:00.716) 0:00:01.962 **** 2025-09-13 00:49:11.785865 | orchestrator | changed: [testbed-manager] 2025-09-13 00:49:11.785876 | orchestrator | 2025-09-13 00:49:11.785887 | 
orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-09-13 00:49:11.785898 | orchestrator | Saturday 13 September 2025 00:47:03 +0000 (0:00:01.102) 0:00:03.064 **** 2025-09-13 00:49:11.785908 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-09-13 00:49:11.785919 | orchestrator | ok: [testbed-manager] 2025-09-13 00:49:11.785930 | orchestrator | 2025-09-13 00:49:11.785941 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-09-13 00:49:11.785952 | orchestrator | Saturday 13 September 2025 00:47:53 +0000 (0:00:49.603) 0:00:52.668 **** 2025-09-13 00:49:11.785962 | orchestrator | changed: [testbed-manager] 2025-09-13 00:49:11.785973 | orchestrator | 2025-09-13 00:49:11.785984 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:49:11.785995 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:49:11.786007 | orchestrator | 2025-09-13 00:49:11.786101 | orchestrator | 2025-09-13 00:49:11.786117 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 00:49:11.786127 | orchestrator | Saturday 13 September 2025 00:47:56 +0000 (0:00:03.825) 0:00:56.494 **** 2025-09-13 00:49:11.786158 | orchestrator | =============================================================================== 2025-09-13 00:49:11.786170 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 49.60s 2025-09-13 00:49:11.786180 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.83s 2025-09-13 00:49:11.786191 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.10s 2025-09-13 00:49:11.786202 | orchestrator | osism.services.phpmyadmin : Create traefik external network 
------------- 0.93s 2025-09-13 00:49:11.786213 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.72s 2025-09-13 00:49:11.786224 | orchestrator | 2025-09-13 00:49:11.786235 | orchestrator | 2025-09-13 00:49:11.786245 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-09-13 00:49:11.786256 | orchestrator | 2025-09-13 00:49:11.786267 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-13 00:49:11.786278 | orchestrator | Saturday 13 September 2025 00:46:29 +0000 (0:00:00.317) 0:00:00.317 **** 2025-09-13 00:49:11.786289 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-13 00:49:11.786303 | orchestrator | 2025-09-13 00:49:11.786313 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-09-13 00:49:11.786324 | orchestrator | Saturday 13 September 2025 00:46:30 +0000 (0:00:01.466) 0:00:01.783 **** 2025-09-13 00:49:11.786335 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-13 00:49:11.786345 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-13 00:49:11.786356 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-13 00:49:11.786367 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-13 00:49:11.786378 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-13 00:49:11.786389 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-13 00:49:11.786399 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-13 00:49:11.786410 | 
orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-13 00:49:11.786421 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-13 00:49:11.786432 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-13 00:49:11.786443 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-13 00:49:11.786454 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-13 00:49:11.786465 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-13 00:49:11.786476 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-13 00:49:11.786486 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-13 00:49:11.786497 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-13 00:49:11.786530 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-13 00:49:11.786580 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-13 00:49:11.786601 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-13 00:49:11.786612 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-13 00:49:11.786623 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-13 00:49:11.786634 | orchestrator | 2025-09-13 00:49:11.786644 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-13 00:49:11.786660 | orchestrator | Saturday 13 September 2025 00:46:35 +0000 
(0:00:04.477) 0:00:06.260 **** 2025-09-13 00:49:11.786670 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-13 00:49:11.786682 | orchestrator | 2025-09-13 00:49:11.786693 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-13 00:49:11.786703 | orchestrator | Saturday 13 September 2025 00:46:36 +0000 (0:00:01.596) 0:00:07.857 **** 2025-09-13 00:49:11.786719 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-13 00:49:11.786736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-13 00:49:11.786747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.786759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.786770 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.786791 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.786807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.786826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.786838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.786849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.786861 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.786872 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.786884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.786918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.786942 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.786955 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.786966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.786977 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.786989 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787012 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787023 | orchestrator |
2025-09-13 00:49:11.787034 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-09-13 00:49:11.787045 | orchestrator | Saturday 13 September 2025 00:46:43 +0000 (0:00:07.193) 0:00:15.050 ****
2025-09-13 00:49:11.787068 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.787154 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.787180 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787214 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:49:11.787226 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:49:11.787237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.787249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787293 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:49:11.787304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.787316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.787349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.787394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787415 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:49:11.787424 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:49:11.787434 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:49:11.787444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.787454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787474 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:49:11.787484 | orchestrator |
2025-09-13 00:49:11.787494 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-09-13 00:49:11.787504 | orchestrator | Saturday 13 September 2025 00:46:46 +0000 (0:00:02.417) 0:00:17.468 ****
2025-09-13 00:49:11.787519 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.787530 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787548 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.787570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787590 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:49:11.787599 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:49:11.787609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.787619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787645 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:49:11.787655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.787678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.787709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.787744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787769 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:49:11.787779 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:49:11.787789 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:49:11.787802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.787813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787823 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.787833 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:49:11.787842 | orchestrator |
2025-09-13 00:49:11.787852 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-09-13 00:49:11.787862 | orchestrator | Saturday 13 September 2025 00:46:51 +0000 (0:00:04.829) 0:00:22.298 ****
2025-09-13 00:49:11.787871 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:49:11.787886 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:49:11.787896 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:49:11.787905 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:49:11.787915 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:49:11.787924 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:49:11.787934 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:49:11.787944 | orchestrator |
2025-09-13 00:49:11.787953 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-09-13 00:49:11.787963 | orchestrator | Saturday 13 September 2025 00:46:52 +0000 (0:00:01.801) 0:00:24.099 ****
2025-09-13 00:49:11.787972 | orchestrator | skipping: [testbed-manager]
2025-09-13 00:49:11.787982 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:49:11.787991 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:49:11.788001 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:49:11.788010 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:49:11.788020 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:49:11.788029 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:49:11.788039 | orchestrator |
2025-09-13 00:49:11.788048 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-09-13 00:49:11.788058 | orchestrator | Saturday 13 September 2025 00:46:54 +0000 (0:00:01.936) 0:00:26.037 ****
2025-09-13 00:49:11.788068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.788127 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.788148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.788159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.788169 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.788185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.788195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:49:11.788209 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-13 00:49:11.788220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE':
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.788230 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-13 00:49:11.788249 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.788260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.788275 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.788285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.788295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.788305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.788315 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.788336 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.788351 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.788361 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.788376 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.788386 | orchestrator | 2025-09-13 00:49:11.788410 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-13 00:49:11.788427 | orchestrator | Saturday 13 September 2025 00:47:01 +0000 (0:00:06.437) 0:00:32.475 **** 2025-09-13 00:49:11.788444 | orchestrator | [WARNING]: Skipped 2025-09-13 00:49:11.788459 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-13 00:49:11.788476 | orchestrator | to this access issue: 2025-09-13 00:49:11.788492 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-13 00:49:11.788508 | orchestrator | directory 2025-09-13 00:49:11.788522 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-13 00:49:11.788532 | orchestrator | 2025-09-13 00:49:11.788542 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-13 00:49:11.788551 | orchestrator | Saturday 13 September 2025 00:47:02 +0000 (0:00:01.374) 0:00:33.849 **** 2025-09-13 00:49:11.788561 | orchestrator | [WARNING]: Skipped 2025-09-13 00:49:11.788571 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-13 00:49:11.788580 | orchestrator | to this access issue: 2025-09-13 00:49:11.788590 | orchestrator 
| '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-13 00:49:11.788599 | orchestrator | directory 2025-09-13 00:49:11.788609 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-13 00:49:11.788618 | orchestrator | 2025-09-13 00:49:11.788628 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-13 00:49:11.788637 | orchestrator | Saturday 13 September 2025 00:47:04 +0000 (0:00:01.269) 0:00:35.119 **** 2025-09-13 00:49:11.788647 | orchestrator | [WARNING]: Skipped 2025-09-13 00:49:11.788656 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-13 00:49:11.788666 | orchestrator | to this access issue: 2025-09-13 00:49:11.788675 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-13 00:49:11.788685 | orchestrator | directory 2025-09-13 00:49:11.788694 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-13 00:49:11.788704 | orchestrator | 2025-09-13 00:49:11.788713 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-13 00:49:11.788722 | orchestrator | Saturday 13 September 2025 00:47:04 +0000 (0:00:00.977) 0:00:36.097 **** 2025-09-13 00:49:11.788732 | orchestrator | [WARNING]: Skipped 2025-09-13 00:49:11.788741 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-13 00:49:11.788751 | orchestrator | to this access issue: 2025-09-13 00:49:11.788760 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-13 00:49:11.788769 | orchestrator | directory 2025-09-13 00:49:11.788779 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-13 00:49:11.788788 | orchestrator | 2025-09-13 00:49:11.788798 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-13 00:49:11.788807 | 
orchestrator | Saturday 13 September 2025 00:47:05 +0000 (0:00:00.749) 0:00:36.846 **** 2025-09-13 00:49:11.788817 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:49:11.788826 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:49:11.788836 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:49:11.788845 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:49:11.788855 | orchestrator | changed: [testbed-manager] 2025-09-13 00:49:11.788870 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:49:11.788880 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:49:11.788889 | orchestrator | 2025-09-13 00:49:11.788899 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-13 00:49:11.788908 | orchestrator | Saturday 13 September 2025 00:47:10 +0000 (0:00:05.066) 0:00:41.913 **** 2025-09-13 00:49:11.788918 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-13 00:49:11.788945 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-13 00:49:11.788955 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-13 00:49:11.788972 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-13 00:49:11.788982 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-13 00:49:11.788996 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-13 00:49:11.789006 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-13 00:49:11.789016 | orchestrator | 2025-09-13 00:49:11.789035 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie 
exists] *************************** 2025-09-13 00:49:11.789044 | orchestrator | Saturday 13 September 2025 00:47:14 +0000 (0:00:03.496) 0:00:45.410 **** 2025-09-13 00:49:11.789054 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:49:11.789064 | orchestrator | changed: [testbed-manager] 2025-09-13 00:49:11.789073 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:49:11.789102 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:49:11.789112 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:49:11.789121 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:49:11.789130 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:49:11.789140 | orchestrator | 2025-09-13 00:49:11.789149 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-13 00:49:11.789159 | orchestrator | Saturday 13 September 2025 00:47:18 +0000 (0:00:03.843) 0:00:49.254 **** 2025-09-13 00:49:11.789169 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-13 00:49:11.789179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 00:49:11.789189 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-13 00:49:11.789199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 00:49:11.789215 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.789236 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-13 00:49:11.789250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 00:49:11.789260 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-13 00:49:11.789270 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 00:49:11.789280 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-13 00:49:11.789290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 00:49:11.789306 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.789316 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.789331 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.789348 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-13 00:49:11.789359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 00:49:11.789369 | orchestrator | 
ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.789379 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.789389 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-13 00:49:11.789405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 00:49:11.789415 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.789425 | orchestrator | 2025-09-13 00:49:11.789435 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-13 00:49:11.789445 | orchestrator | Saturday 13 September 2025 00:47:22 +0000 (0:00:04.421) 0:00:53.676 **** 2025-09-13 00:49:11.789454 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-13 00:49:11.789464 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-13 00:49:11.789474 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-13 00:49:11.789491 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-13 00:49:11.789501 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-13 00:49:11.789514 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-13 00:49:11.789524 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-13 00:49:11.789533 | orchestrator | 2025-09-13 00:49:11.789543 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-13 00:49:11.789552 | orchestrator | Saturday 13 September 2025 00:47:25 +0000 (0:00:03.095) 0:00:56.771 
**** 2025-09-13 00:49:11.789562 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-13 00:49:11.789571 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-13 00:49:11.789581 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-13 00:49:11.789591 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-13 00:49:11.789600 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-13 00:49:11.789610 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-13 00:49:11.789619 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-13 00:49:11.789629 | orchestrator | 2025-09-13 00:49:11.789638 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-13 00:49:11.789648 | orchestrator | Saturday 13 September 2025 00:47:28 +0000 (0:00:02.619) 0:00:59.391 **** 2025-09-13 00:49:11.789658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-13 00:49:11.789673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-13 00:49:11.789684 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-13 00:49:11.789694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-13 00:49:11.789704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-13 00:49:11.789858 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-13 00:49:11.789874 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-13 00:49:11.789884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.789900 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.789910 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.789921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.789931 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.789959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.789970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.789980 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.789997 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.790007 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.790055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.790069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.790102 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.790113 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:49:11.790123 | orchestrator | 2025-09-13 00:49:11.790138 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-09-13 00:49:11.790148 | orchestrator | Saturday 13 September 2025 00:47:32 +0000 (0:00:04.418) 0:01:03.810 **** 2025-09-13 00:49:11.790158 | orchestrator | changed: [testbed-manager] 2025-09-13 00:49:11.790167 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:49:11.790177 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:49:11.790186 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:49:11.790196 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:49:11.790205 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:49:11.790215 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:49:11.790224 | orchestrator | 2025-09-13 00:49:11.790234 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-13 00:49:11.790249 | orchestrator | Saturday 13 September 2025 00:47:34 +0000 (0:00:01.627) 0:01:05.437 **** 2025-09-13 00:49:11.790258 | orchestrator | changed: [testbed-manager] 2025-09-13 00:49:11.790274 | orchestrator | changed: [testbed-node-0] 
2025-09-13 00:49:11.790284 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:49:11.790294 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:49:11.790303 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:49:11.790313 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:49:11.790322 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:49:11.790332 | orchestrator | 2025-09-13 00:49:11.790341 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-13 00:49:11.790351 | orchestrator | Saturday 13 September 2025 00:47:35 +0000 (0:00:01.171) 0:01:06.608 **** 2025-09-13 00:49:11.790361 | orchestrator | 2025-09-13 00:49:11.790370 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-13 00:49:11.790380 | orchestrator | Saturday 13 September 2025 00:47:35 +0000 (0:00:00.078) 0:01:06.687 **** 2025-09-13 00:49:11.790390 | orchestrator | 2025-09-13 00:49:11.790399 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-13 00:49:11.790408 | orchestrator | Saturday 13 September 2025 00:47:35 +0000 (0:00:00.072) 0:01:06.759 **** 2025-09-13 00:49:11.790418 | orchestrator | 2025-09-13 00:49:11.790428 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-13 00:49:11.790437 | orchestrator | Saturday 13 September 2025 00:47:35 +0000 (0:00:00.088) 0:01:06.848 **** 2025-09-13 00:49:11.790447 | orchestrator | 2025-09-13 00:49:11.790456 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-13 00:49:11.790466 | orchestrator | Saturday 13 September 2025 00:47:35 +0000 (0:00:00.195) 0:01:07.044 **** 2025-09-13 00:49:11.790475 | orchestrator | 2025-09-13 00:49:11.790485 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-13 00:49:11.790496 | orchestrator | 
Saturday 13 September 2025 00:47:35 +0000 (0:00:00.061) 0:01:07.105 **** 2025-09-13 00:49:11.790507 | orchestrator | 2025-09-13 00:49:11.790518 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-13 00:49:11.790529 | orchestrator | Saturday 13 September 2025 00:47:36 +0000 (0:00:00.091) 0:01:07.197 **** 2025-09-13 00:49:11.790540 | orchestrator | 2025-09-13 00:49:11.790551 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-09-13 00:49:11.790561 | orchestrator | Saturday 13 September 2025 00:47:36 +0000 (0:00:00.086) 0:01:07.284 **** 2025-09-13 00:49:11.790572 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:49:11.790584 | orchestrator | changed: [testbed-manager] 2025-09-13 00:49:11.790595 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:49:11.790606 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:49:11.790618 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:49:11.790629 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:49:11.790639 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:49:11.790650 | orchestrator | 2025-09-13 00:49:11.790661 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-13 00:49:11.790672 | orchestrator | Saturday 13 September 2025 00:48:17 +0000 (0:00:41.465) 0:01:48.750 **** 2025-09-13 00:49:11.790682 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:49:11.790693 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:49:11.790704 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:49:11.790715 | orchestrator | changed: [testbed-manager] 2025-09-13 00:49:11.790726 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:49:11.790736 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:49:11.790747 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:49:11.790757 | orchestrator | 2025-09-13 00:49:11.790768 | orchestrator 
| RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-13 00:49:11.790779 | orchestrator | Saturday 13 September 2025 00:48:58 +0000 (0:00:40.610) 0:02:29.360 **** 2025-09-13 00:49:11.790790 | orchestrator | ok: [testbed-manager] 2025-09-13 00:49:11.790801 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:49:11.790812 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:49:11.790823 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:49:11.790839 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:49:11.790849 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:49:11.790858 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:49:11.790868 | orchestrator | 2025-09-13 00:49:11.790878 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-13 00:49:11.790887 | orchestrator | Saturday 13 September 2025 00:49:00 +0000 (0:00:02.024) 0:02:31.384 **** 2025-09-13 00:49:11.790897 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:49:11.790907 | orchestrator | changed: [testbed-manager] 2025-09-13 00:49:11.790916 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:49:11.790926 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:49:11.790935 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:49:11.790945 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:49:11.790954 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:49:11.790964 | orchestrator | 2025-09-13 00:49:11.790974 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:49:11.790984 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-13 00:49:11.790995 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-13 00:49:11.791009 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 
ignored=0 2025-09-13 00:49:11.791020 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-13 00:49:11.791033 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-13 00:49:11.791043 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-13 00:49:11.791053 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-13 00:49:11.791062 | orchestrator | 2025-09-13 00:49:11.791072 | orchestrator | 2025-09-13 00:49:11.791124 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 00:49:11.791134 | orchestrator | Saturday 13 September 2025 00:49:09 +0000 (0:00:09.452) 0:02:40.837 **** 2025-09-13 00:49:11.791144 | orchestrator | =============================================================================== 2025-09-13 00:49:11.791154 | orchestrator | common : Restart fluentd container ------------------------------------- 41.47s 2025-09-13 00:49:11.791163 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 40.61s 2025-09-13 00:49:11.791173 | orchestrator | common : Restart cron container ----------------------------------------- 9.45s 2025-09-13 00:49:11.791183 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 7.19s 2025-09-13 00:49:11.791192 | orchestrator | common : Copying over config.json files for services -------------------- 6.44s 2025-09-13 00:49:11.791201 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.07s 2025-09-13 00:49:11.791208 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 4.83s 2025-09-13 00:49:11.791216 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.48s 
2025-09-13 00:49:11.791224 | orchestrator | common : Ensuring config directories have correct owner and permission --- 4.42s 2025-09-13 00:49:11.791232 | orchestrator | common : Check common containers ---------------------------------------- 4.42s 2025-09-13 00:49:11.791240 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.84s 2025-09-13 00:49:11.791248 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.50s 2025-09-13 00:49:11.791261 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.10s 2025-09-13 00:49:11.791269 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.62s 2025-09-13 00:49:11.791276 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.42s 2025-09-13 00:49:11.791284 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.02s 2025-09-13 00:49:11.791292 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.94s 2025-09-13 00:49:11.791300 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.80s 2025-09-13 00:49:11.791308 | orchestrator | common : Creating log volume -------------------------------------------- 1.63s 2025-09-13 00:49:11.791315 | orchestrator | common : include_tasks -------------------------------------------------- 1.60s 2025-09-13 00:49:11.791323 | orchestrator | 2025-09-13 00:49:11 | INFO  | Task 57545b78-894f-4f76-ad34-eba2455e58fe is in state SUCCESS 2025-09-13 00:49:11.791331 | orchestrator | 2025-09-13 00:49:11 | INFO  | Task 29d62ab4-14da-424d-99bd-d5beeec6e429 is in state STARTED 2025-09-13 00:49:11.791340 | orchestrator | 2025-09-13 00:49:11 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED 2025-09-13 00:49:11.791347 | orchestrator | 2025-09-13 00:49:11 | INFO  | Task 
05c9d431-e79b-4b57-a053-94e6bbc24c1a is in state STARTED 2025-09-13 00:49:11.791355 | orchestrator | 2025-09-13 00:49:11 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:49:39.607031 | orchestrator | 2025-09-13 00:49:39 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED 2025-09-13 00:49:39.608040 | orchestrator | 2025-09-13 00:49:39 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED 2025-09-13 00:49:39.608874 | orchestrator | 2025-09-13 00:49:39 | INFO  | Task 
62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:49:39.609614 | orchestrator | 2025-09-13 00:49:39 | INFO  | Task 29d62ab4-14da-424d-99bd-d5beeec6e429 is in state STARTED 2025-09-13 00:49:39.610396 | orchestrator | 2025-09-13 00:49:39 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED 2025-09-13 00:49:39.610830 | orchestrator | 2025-09-13 00:49:39 | INFO  | Task 05c9d431-e79b-4b57-a053-94e6bbc24c1a is in state SUCCESS 2025-09-13 00:49:39.611019 | orchestrator | 2025-09-13 00:49:39 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:49:42.650437 | orchestrator | 2025-09-13 00:49:42 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED 2025-09-13 00:49:42.650642 | orchestrator | 2025-09-13 00:49:42 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED 2025-09-13 00:49:42.651560 | orchestrator | 2025-09-13 00:49:42 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED 2025-09-13 00:49:42.652230 | orchestrator | 2025-09-13 00:49:42 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:49:42.653037 | orchestrator | 2025-09-13 00:49:42 | INFO  | Task 29d62ab4-14da-424d-99bd-d5beeec6e429 is in state SUCCESS 2025-09-13 00:49:42.654412 | orchestrator | 2025-09-13 00:49:42.654446 | orchestrator | 2025-09-13 00:49:42.654460 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-13 00:49:42.654474 | orchestrator | 2025-09-13 00:49:42.654488 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-13 00:49:42.654501 | orchestrator | Saturday 13 September 2025 00:49:18 +0000 (0:00:00.929) 0:00:00.929 **** 2025-09-13 00:49:42.654514 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:49:42.654527 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:49:42.654538 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:49:42.654549 | orchestrator | 
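The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from a fixed-interval polling loop that re-checks each task UUID until it leaves the STARTED state. A minimal sketch of that pattern follows; `get_task_state` and the canned `_states` sequence are illustrative assumptions, not the actual OSISM client API, which queries the real task backend for each UUID.

```python
import time

# Assumption: canned state sequence standing in for the real task backend,
# which would be queried over the OSISM/Celery result store per task UUID.
_states = {"29d62ab4": iter(["STARTED", "STARTED", "SUCCESS"])}

def get_task_state(task_id):
    # Hypothetical lookup: returns the current state string for a task UUID.
    return next(_states[task_id])

def wait_for_tasks(task_ids, interval=1):
    """Poll every `interval` seconds until every task leaves STARTED."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
    return True
```

A fixed one-second interval, as seen in the log, keeps the loop simple at the cost of some redundant checks; the same structure works with a longer or exponential backoff when tasks run for minutes.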
2025-09-13 00:49:42.654560 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 00:49:42.654571 | orchestrator | Saturday 13 September 2025 00:49:19 +0000 (0:00:00.625) 0:00:01.555 ****
2025-09-13 00:49:42.654608 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-09-13 00:49:42.654621 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-09-13 00:49:42.654632 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-09-13 00:49:42.654643 | orchestrator |
2025-09-13 00:49:42.654654 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-09-13 00:49:42.654665 | orchestrator |
2025-09-13 00:49:42.654676 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-09-13 00:49:42.654687 | orchestrator | Saturday 13 September 2025 00:49:20 +0000 (0:00:01.266) 0:00:02.821 ****
2025-09-13 00:49:42.654698 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:49:42.654709 | orchestrator |
2025-09-13 00:49:42.654720 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-09-13 00:49:42.654732 | orchestrator | Saturday 13 September 2025 00:49:21 +0000 (0:00:00.919) 0:00:03.741 ****
2025-09-13 00:49:42.654744 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-13 00:49:42.654755 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-13 00:49:42.654766 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-13 00:49:42.654777 | orchestrator |
2025-09-13 00:49:42.654803 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-09-13 00:49:42.654815 | orchestrator | Saturday 13 September 2025 00:49:23 +0000 (0:00:01.692) 0:00:05.433 ****
2025-09-13 00:49:42.654826 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-13 00:49:42.654838 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-13 00:49:42.654849 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-13 00:49:42.654860 | orchestrator |
2025-09-13 00:49:42.654872 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-09-13 00:49:42.654883 | orchestrator | Saturday 13 September 2025 00:49:25 +0000 (0:00:02.520) 0:00:07.954 ****
2025-09-13 00:49:42.654894 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:49:42.654906 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:49:42.654917 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:49:42.654928 | orchestrator |
2025-09-13 00:49:42.654940 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-09-13 00:49:42.654951 | orchestrator | Saturday 13 September 2025 00:49:29 +0000 (0:00:03.366) 0:00:11.320 ****
2025-09-13 00:49:42.654963 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:49:42.654974 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:49:42.654985 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:49:42.654997 | orchestrator |
2025-09-13 00:49:42.655008 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:49:42.655020 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 00:49:42.655033 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 00:49:42.655044 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 00:49:42.655056 | orchestrator |
2025-09-13 00:49:42.655067 | orchestrator |
2025-09-13 00:49:42.655078 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:49:42.655090 | orchestrator | Saturday 13 September 2025 00:49:36 +0000 (0:00:07.580) 0:00:18.900 ****
2025-09-13 00:49:42.655125 | orchestrator | ===============================================================================
2025-09-13 00:49:42.655137 | orchestrator | memcached : Restart memcached container --------------------------------- 7.58s
2025-09-13 00:49:42.655148 | orchestrator | memcached : Check memcached container ----------------------------------- 3.37s
2025-09-13 00:49:42.655167 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.52s
2025-09-13 00:49:42.655179 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.69s
2025-09-13 00:49:42.655190 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.27s
2025-09-13 00:49:42.655200 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.92s
2025-09-13 00:49:42.655212 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.63s
2025-09-13 00:49:42.655223 | orchestrator |
2025-09-13 00:49:42.655234 | orchestrator |
2025-09-13 00:49:42.655245 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 00:49:42.655256 | orchestrator |
2025-09-13 00:49:42.655266 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-13 00:49:42.655277 | orchestrator | Saturday 13 September 2025 00:49:17 +0000 (0:00:00.526) 0:00:00.526 ****
2025-09-13 00:49:42.655288 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:49:42.655300 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:49:42.655311 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:49:42.655322 | orchestrator |
2025-09-13 00:49:42.655333 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 00:49:42.655355 | orchestrator | Saturday 13 September 2025 00:49:18 +0000 (0:00:00.591) 0:00:01.118 ****
2025-09-13 00:49:42.655367 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-09-13 00:49:42.655378 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-09-13 00:49:42.655389 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-09-13 00:49:42.655400 | orchestrator |
2025-09-13 00:49:42.655411 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-09-13 00:49:42.655422 | orchestrator |
2025-09-13 00:49:42.655433 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-09-13 00:49:42.655444 | orchestrator | Saturday 13 September 2025 00:49:19 +0000 (0:00:00.973) 0:00:02.091 ****
2025-09-13 00:49:42.655455 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:49:42.655466 | orchestrator |
2025-09-13 00:49:42.655477 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-09-13 00:49:42.655488 | orchestrator | Saturday 13 September 2025 00:49:19 +0000 (0:00:00.901) 0:00:02.993 ****
2025-09-13 00:49:42.655502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655600 | orchestrator |
2025-09-13 00:49:42.655611 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-09-13 00:49:42.655622 | orchestrator | Saturday 13 September 2025 00:49:21 +0000 (0:00:02.003) 0:00:04.997 ****
2025-09-13 00:49:42.655634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655720 | orchestrator |
2025-09-13 00:49:42.655731 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-09-13 00:49:42.655742 | orchestrator | Saturday 13 September 2025 00:49:25 +0000 (0:00:03.921) 0:00:08.919 ****
2025-09-13 00:49:42.655754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655834 | orchestrator |
2025-09-13 00:49:42.655851 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-09-13 00:49:42.655863 | orchestrator | Saturday 13 September 2025 00:49:29 +0000 (0:00:04.167) 0:00:13.086 ****
2025-09-13 00:49:42.655874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-13 00:49:42.655955 | orchestrator |
2025-09-13 00:49:42.655966 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-13 00:49:42.655977 | orchestrator | Saturday 13 September 2025 00:49:31 +0000 (0:00:01.897) 0:00:14.984 ****
2025-09-13 00:49:42.655988 | orchestrator |
2025-09-13 00:49:42.655999 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-13 00:49:42.656015 | orchestrator | Saturday 13 September 2025 00:49:32 +0000 (0:00:00.183) 0:00:15.167 ****
2025-09-13 00:49:42.656026 | orchestrator |
2025-09-13 00:49:42.656037 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-13 00:49:42.656048 | orchestrator | Saturday 13 September 2025 00:49:32 +0000 (0:00:00.114) 0:00:15.282 ****
2025-09-13 00:49:42.656059 | orchestrator |
2025-09-13 00:49:42.656070 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-09-13 00:49:42.656080 | orchestrator | Saturday 13 September 2025 00:49:32 +0000 (0:00:00.071) 0:00:15.354 ****
2025-09-13 00:49:42.656091 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:49:42.656119 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:49:42.656131 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:49:42.656141 | orchestrator |
2025-09-13 00:49:42.656152 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-09-13 00:49:42.656163 | orchestrator | Saturday 13 September 2025 00:49:36 +0000 (0:00:04.051) 0:00:19.406 ****
2025-09-13 00:49:42.656174 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:49:42.656185 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:49:42.656195 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:49:42.656206 | orchestrator |
2025-09-13 00:49:42.656217 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:49:42.656235 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 00:49:42.656247 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 00:49:42.656258 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 00:49:42.656269 | orchestrator |
2025-09-13 00:49:42.656280 | orchestrator |
2025-09-13 00:49:42.656291 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:49:42.656301 | orchestrator | Saturday 13 September 2025 00:49:41 +0000 (0:00:05.606) 0:00:25.013 ****
2025-09-13 00:49:42.656313 | orchestrator | ===============================================================================
2025-09-13 00:49:42.656323 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 5.61s
2025-09-13 00:49:42.656334 | orchestrator | redis : Copying over redis config files --------------------------------- 4.17s
2025-09-13 00:49:42.656345 | orchestrator | redis : Restart redis container ----------------------------------------- 4.05s
2025-09-13 00:49:42.656356 | orchestrator | redis : Copying over default config.json files -------------------------- 3.92s
2025-09-13 00:49:42.656367 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.01s
2025-09-13 00:49:42.656378 | orchestrator | redis : Check redis containers ------------------------------------------ 1.90s
2025-09-13 00:49:42.656388 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.97s
2025-09-13 00:49:42.656399 | orchestrator | redis : include_tasks --------------------------------------------------- 0.90s
2025-09-13 00:49:42.656410 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.59s
2025-09-13 00:49:42.656421 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.37s
2025-09-13 00:49:42.658320 | orchestrator | 2025-09-13 00:49:42 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:49:42.658343 | orchestrator | 2025-09-13 00:49:42 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:49:45.711357 | orchestrator | 2025-09-13 00:49:45 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED
2025-09-13 00:49:45.712870 | orchestrator | 2025-09-13 00:49:45 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:49:45.716619 | orchestrator | 2025-09-13 00:49:45 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED
2025-09-13 00:49:45.717686 | orchestrator | 2025-09-13 00:49:45 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:49:45.719879 | orchestrator | 2025-09-13 00:49:45 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:49:45.719903 | orchestrator | 2025-09-13 00:49:45 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:49:48.767579 | orchestrator | 2025-09-13 00:49:48 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED
2025-09-13 00:49:48.768157 | orchestrator | 2025-09-13 00:49:48 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:49:48.768788 | orchestrator | 2025-09-13 00:49:48 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED
2025-09-13 00:49:48.769948 | orchestrator | 2025-09-13 00:49:48 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:49:48.771018 | orchestrator | 2025-09-13 00:49:48 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:49:48.771360 | orchestrator | 2025-09-13 00:49:48 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:49:51.794928 | orchestrator | 2025-09-13 00:49:51 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED
2025-09-13 00:49:51.797827 | orchestrator | 2025-09-13 00:49:51 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:49:51.800473 | orchestrator | 2025-09-13 00:49:51 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED
2025-09-13 00:49:51.809059 | orchestrator | 2025-09-13 00:49:51 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:49:51.809898 | orchestrator | 2025-09-13 00:49:51 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:49:51.810204 | orchestrator | 2025-09-13 00:49:51 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:49:54.899352 | orchestrator | 2025-09-13 00:49:54 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED
2025-09-13 00:49:54.900159 | orchestrator | 2025-09-13 00:49:54 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:49:54.900191 | orchestrator | 2025-09-13 00:49:54 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED
2025-09-13 00:49:54.900206 | orchestrator | 2025-09-13 00:49:54 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:49:54.900219 | orchestrator | 2025-09-13 00:49:54 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:49:54.900232 | orchestrator | 2025-09-13 00:49:54 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:49:58.160375 | orchestrator | 2025-09-13 00:49:58 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED
2025-09-13 00:49:58.208770 | orchestrator | 2025-09-13 00:49:58 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:49:58.261843 | orchestrator | 2025-09-13 00:49:58 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED
2025-09-13 00:49:58.279144 | orchestrator | 2025-09-13 00:49:58 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:49:58.300455 | orchestrator | 2025-09-13 00:49:58 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:49:58.300486 | orchestrator | 2025-09-13 00:49:58 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:50:01.917857 | orchestrator | 2025-09-13 00:50:01 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED
2025-09-13 00:50:01.917953 | orchestrator | 2025-09-13 00:50:01 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:50:01.917967 | orchestrator | 2025-09-13 00:50:01 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED
2025-09-13 00:50:01.917978 | orchestrator | 2025-09-13 00:50:01 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:50:01.917989 | orchestrator | 2025-09-13 00:50:01 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:50:01.918001 | orchestrator | 2025-09-13 00:50:01 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:50:04.932348 | orchestrator | 2025-09-13 00:50:04 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED
2025-09-13 00:50:04.932461 | orchestrator | 2025-09-13 00:50:04 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:50:04.932974 | orchestrator | 2025-09-13 00:50:04 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED
2025-09-13 00:50:04.933581 | orchestrator | 2025-09-13 00:50:04 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:50:04.934247 | orchestrator | 2025-09-13 00:50:04 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:50:04.934275 | orchestrator | 2025-09-13 00:50:04 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:50:08.036977 | orchestrator | 2025-09-13 00:50:08 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED
2025-09-13 00:50:08.037171 | orchestrator | 2025-09-13 00:50:08 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:50:08.037739 | orchestrator | 2025-09-13 00:50:08 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state STARTED
2025-09-13 00:50:08.038302 | orchestrator | 2025-09-13 00:50:08 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:50:08.038846 | orchestrator | 2025-09-13 00:50:08 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:50:08.041673 | orchestrator | 2025-09-13 00:50:08 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:50:11.125460 | orchestrator |
2025-09-13 00:50:11.125715 | orchestrator |
2025-09-13 00:50:11.125733 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-09-13 00:50:11.125745 | orchestrator |
2025-09-13 00:50:11.125755 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-09-13 00:50:11.125765 | orchestrator | Saturday 13 September 2025 00:46:29 +0000 (0:00:00.210) 0:00:00.210 ****
2025-09-13 00:50:11.125775 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:50:11.125785 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:50:11.125795 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:50:11.125805 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:50:11.125814 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:50:11.125824 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:50:11.125834 | orchestrator |
2025-09-13 00:50:11.125844 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-09-13 00:50:11.125854 | orchestrator | Saturday 13 September 2025 00:46:29 +0000 (0:00:00.785) 0:00:00.996 ****
2025-09-13 00:50:11.125864 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:50:11.125875 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:50:11.125884 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:50:11.125894 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:50:11.125904 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:50:11.125913 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:50:11.125923 | orchestrator |
2025-09-13 00:50:11.125933 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-09-13 00:50:11.125942 | orchestrator | Saturday 13 September 2025 00:46:30 +0000 (0:00:00.611) 0:00:01.607 ****
2025-09-13 00:50:11.125952 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:50:11.125961 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:50:11.125971 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:50:11.125981 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:50:11.125990 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:50:11.126000 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:50:11.126010 | orchestrator |
2025-09-13 00:50:11.126064 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-09-13 00:50:11.126075 | orchestrator | Saturday 13 September 2025 00:46:31 +0000 (0:00:00.712) 0:00:02.320 ****
2025-09-13 00:50:11.126085 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:50:11.126094 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:50:11.126104 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:50:11.126134 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:50:11.126157 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:50:11.126167 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:50:11.126177 | orchestrator |
2025-09-13 00:50:11.126187 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-09-13 00:50:11.126197 | orchestrator | Saturday 13 September 2025 00:46:33 +0000 (0:00:02.607) 0:00:04.927 ****
2025-09-13 00:50:11.126224 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:50:11.126234 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:50:11.126244 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:50:11.126254 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:50:11.126263 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:50:11.126273 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:50:11.126283 | orchestrator |
2025-09-13 00:50:11.126292 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-09-13 00:50:11.126302 | orchestrator | Saturday 13 September 2025 00:46:35 +0000 (0:00:01.187) 0:00:06.115 ****
2025-09-13 00:50:11.126312 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:50:11.126321 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:50:11.126331 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:50:11.126341 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:50:11.126351 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:50:11.126360 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:50:11.126370 | orchestrator |
2025-09-13 00:50:11.126380 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-09-13 00:50:11.126390 | orchestrator | Saturday 13 September 2025 00:46:36 +0000 (0:00:01.508) 0:00:07.623 ****
2025-09-13 00:50:11.126399 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:50:11.126409 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:50:11.126419 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:50:11.126428 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:50:11.126438 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:50:11.126448 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:50:11.126457 | orchestrator |
2025-09-13 00:50:11.126467 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-09-13 00:50:11.126477 | orchestrator | Saturday 13 September 2025 00:46:37 +0000 (0:00:00.671) 0:00:08.295 ****
2025-09-13 00:50:11.126487 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:50:11.126496 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:50:11.126506 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:50:11.126515 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:50:11.126525 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:50:11.126535 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:50:11.126544 | orchestrator |
2025-09-13 00:50:11.126554 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-09-13 00:50:11.126564 | orchestrator | Saturday 13 September 2025 00:46:38 +0000 (0:00:01.124) 0:00:09.419 ****
2025-09-13 00:50:11.126574 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-13 00:50:11.126583 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-13 00:50:11.126593 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:50:11.126603 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-13 00:50:11.126613 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-13 00:50:11.126623 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:50:11.126632 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-13 00:50:11.126642 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-13 00:50:11.126651 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:50:11.126661 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-13 00:50:11.126686 | orchestrator | skipping: [testbed-node-0] =>
(item=net.bridge.bridge-nf-call-ip6tables)  2025-09-13 00:50:11.126696 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.126706 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-13 00:50:11.126715 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-13 00:50:11.126725 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:50:11.126742 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-13 00:50:11.126752 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-13 00:50:11.126761 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:50:11.126771 | orchestrator | 2025-09-13 00:50:11.126781 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-13 00:50:11.126790 | orchestrator | Saturday 13 September 2025 00:46:39 +0000 (0:00:01.301) 0:00:10.721 **** 2025-09-13 00:50:11.126800 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:50:11.126810 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:50:11.126819 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:50:11.126829 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.126839 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:50:11.126848 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:50:11.126858 | orchestrator | 2025-09-13 00:50:11.126867 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-13 00:50:11.126878 | orchestrator | Saturday 13 September 2025 00:46:41 +0000 (0:00:02.005) 0:00:12.726 **** 2025-09-13 00:50:11.126888 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:50:11.126897 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:50:11.126907 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:50:11.126916 | orchestrator | ok: 
[testbed-node-0] 2025-09-13 00:50:11.126926 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:50:11.126936 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:50:11.126945 | orchestrator | 2025-09-13 00:50:11.126955 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-13 00:50:11.126964 | orchestrator | Saturday 13 September 2025 00:46:43 +0000 (0:00:01.733) 0:00:14.459 **** 2025-09-13 00:50:11.126974 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:50:11.126984 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:50:11.126993 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:50:11.127007 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:50:11.127017 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:50:11.127027 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:50:11.127037 | orchestrator | 2025-09-13 00:50:11.127046 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-13 00:50:11.127056 | orchestrator | Saturday 13 September 2025 00:46:50 +0000 (0:00:06.671) 0:00:21.130 **** 2025-09-13 00:50:11.127066 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:50:11.127075 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:50:11.127085 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:50:11.127095 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.127104 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:50:11.127129 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:50:11.127139 | orchestrator | 2025-09-13 00:50:11.127149 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-13 00:50:11.127159 | orchestrator | Saturday 13 September 2025 00:46:51 +0000 (0:00:01.849) 0:00:22.980 **** 2025-09-13 00:50:11.127169 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:50:11.127178 | orchestrator | skipping: [testbed-node-4] 
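The forwarding tasks above each flip a single sysctl key. As a minimal, runnable sketch of what a drop-in for those keys looks like (the key names follow the task names; the exact file path and values the k3s_prereq role writes are an assumption), writing to /tmp so it is safe to run anywhere:

```shell
# Hypothetical sysctl drop-in mirroring the k3s_prereq forwarding tasks.
# Values are assumed from the task names; the role may write different files.
conf=/tmp/90-k3s-forwarding.conf
cat > "$conf" <<'EOF'
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.accept_ra = 2
EOF
# On a real node this would be applied with: sudo sysctl -p "$conf"
grep -c '=' "$conf"   # prints 3
```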
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
orchestrator | Saturday 13 September 2025 00:46:55 +0000 (0:00:03.083) 0:00:26.063 ****
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-5]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
orchestrator | Saturday 13 September 2025 00:46:56 +0000 (0:00:01.857) 0:00:27.921 ****
orchestrator | changed: [testbed-node-0] => (item=rancher)
orchestrator | changed: [testbed-node-5] => (item=rancher)
orchestrator | changed: [testbed-node-4] => (item=rancher)
orchestrator | changed: [testbed-node-3] => (item=rancher)
orchestrator | changed: [testbed-node-1] => (item=rancher)
orchestrator | changed: [testbed-node-2] => (item=rancher)
orchestrator | changed: [testbed-node-5] => (item=rancher/k3s)
orchestrator | changed: [testbed-node-0] => (item=rancher/k3s)
orchestrator | changed: [testbed-node-4] => (item=rancher/k3s)
orchestrator | changed: [testbed-node-3] => (item=rancher/k3s)
orchestrator | changed: [testbed-node-1] => (item=rancher/k3s)
orchestrator | changed: [testbed-node-2] => (item=rancher/k3s)
orchestrator |
orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
orchestrator | Saturday 13 September 2025 00:46:59 +0000 (0:00:03.043) 0:00:30.964 ****
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | PLAY [Deploy k3s master nodes] *************************************************
orchestrator |
orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
orchestrator | Saturday 13 September 2025 00:47:01 +0000 (0:00:01.825) 0:00:32.789 ****
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-1]
orchestrator |
orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
orchestrator | Saturday 13 September 2025 00:47:02 +0000 (0:00:01.011) 0:00:33.800 ****
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
orchestrator | Saturday 13 September 2025 00:47:04 +0000 (0:00:01.353) 0:00:35.154 ****
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
orchestrator | Saturday 13 September 2025 00:47:05 +0000 (0:00:00.945) 0:00:36.099 ****
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
orchestrator | Saturday 13 September 2025 00:47:05 +0000 (0:00:00.909) 0:00:37.009 ****
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
orchestrator | Saturday 13 September 2025 00:47:06 +0000 (0:00:00.915) 0:00:37.924 ****
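The registries.yaml written above is how k3s is pointed at a mirror or private registry. A hedged sketch of the file's shape (the mirror endpoint below is a placeholder; the testbed's actual endpoints are not shown in this log):

```yaml
# /etc/rancher/k3s/registries.yaml — illustrative only.
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
```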
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
orchestrator | Saturday 13 September 2025 00:47:07 +0000 (0:00:01.046) 0:00:38.970 ****
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator |
orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
orchestrator | Saturday 13 September 2025 00:47:10 +0000 (0:00:02.189) 0:00:41.160 ****
orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
orchestrator | Saturday 13 September 2025 00:47:10 +0000 (0:00:00.702) 0:00:41.862 ****
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
orchestrator | Saturday 13 September 2025 00:47:12 +0000 (0:00:02.050) 0:00:43.913 ****
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-1]
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
orchestrator | Saturday 13 September 2025 00:47:13 +0000 (0:00:00.758) 0:00:44.671 ****
orchestrator | skipping: [testbed-node-1]
orchestrator | changed: [testbed-node-0]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
orchestrator | Saturday 13 September 2025 00:47:14 +0000 (0:00:01.091) 0:00:45.763 ****
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
orchestrator | Saturday 13 September 2025 00:47:16 +0000 (0:00:01.919) 0:00:47.682 ****
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
orchestrator | Saturday 13 September 2025 00:47:17 +0000 (0:00:00.668) 0:00:48.351 ****
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
orchestrator | Saturday 13 September 2025 00:47:17 +0000 (0:00:00.432) 0:00:48.784 ****
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-1]
orchestrator |
orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
orchestrator | Saturday 13 September 2025 00:47:21 +0000 (0:00:03.474) 0:00:52.258 ****
orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
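The FAILED - RETRYING lines above are Ansible's `until`/`retries` loop polling until every node has joined. A sketch of how such a verification task is conventionally shaped (this is not the literal k3s_server source; the `k3s_cluster` group name is an assumption):

```yaml
# Sketch of an until/retries verification task (illustrative, not the role's code).
- name: Verify that all nodes actually joined
  ansible.builtin.command:
    cmd: k3s kubectl get nodes -o name
  register: nodes
  until: nodes.stdout_lines | length == (groups['k3s_cluster'] | length)
  retries: 20
  delay: 10
  changed_when: false
```

With `retries: 20`, the "(17 retries left)" messages above mean the cluster converged on roughly the fourth attempt.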
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-1]
orchestrator |
orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
orchestrator | Saturday 13 September 2025 00:48:05 +0000 (0:00:44.686) 0:01:36.945 ****
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
orchestrator | Saturday 13 September 2025 00:48:06 +0000 (0:00:00.244) 0:01:37.190 ****
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
orchestrator | Saturday 13 September 2025 00:48:07 +0000 (0:00:00.961) 0:01:38.152 ****
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
orchestrator | Saturday 13 September 2025 00:48:08 +0000 (0:00:01.244) 0:01:39.396 ****
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
orchestrator | Saturday 13 September 2025 00:48:31 +0000 (0:00:22.933) 0:02:02.329 ****
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator |
orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
orchestrator | Saturday 13 September 2025 00:48:32 +0000 (0:00:00.721) 0:02:03.051 ****
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Change file access node-token] ******************************
orchestrator | Saturday 13 September 2025 00:48:32 +0000 (0:00:00.627) 0:02:03.678 ****
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Read node-token from master] ********************************
orchestrator | Saturday 13 September 2025 00:48:33 +0000 (0:00:00.625) 0:02:04.304 ****
orchestrator | ok: [testbed-node-2]
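The node-token read above is what later lets agents join the server. A minimal runnable sketch of how such a token is consumed (`/var/lib/rancher/k3s/server/node-token` is the conventional server-side path; here a temp file with a placeholder token stands in, and the env-file layout is illustrative — the server URL is the one configured in this run):

```shell
# Illustrative: compose the environment an agent would use to join.
token_file=/tmp/node-token
printf 'K10abc::server:sekrit\n' > "$token_file"   # placeholder token
K3S_TOKEN="$(cat "$token_file")"
K3S_URL="https://192.168.16.8:6443"                # server VIP from this deployment
printf 'K3S_URL=%s\nK3S_TOKEN=%s\n' "$K3S_URL" "$K3S_TOKEN" > /tmp/k3s-agent.env
cat /tmp/k3s-agent.env
```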
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server : Store Master node-token] ************************************
orchestrator | Saturday 13 September 2025 00:48:34 +0000 (0:00:00.938) 0:02:05.243 ****
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
orchestrator | Saturday 13 September 2025 00:48:34 +0000 (0:00:00.318) 0:02:05.562 ****
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Create directory .kube] *************************************
orchestrator | Saturday 13 September 2025 00:48:35 +0000 (0:00:00.683) 0:02:06.246 ****
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
orchestrator | Saturday 13 September 2025 00:48:35 +0000 (0:00:00.650) 0:02:06.896 ****
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
orchestrator | Saturday 13 September 2025 00:48:36 +0000 (0:00:00.971) 0:02:07.867 ****
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
orchestrator | Saturday 13 September 2025 00:48:37 +0000 (0:00:00.766) 0:02:08.634 ****
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
orchestrator | Saturday 13 September 2025 00:48:37 +0000 (0:00:00.267) 0:02:08.901 ****
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
orchestrator | Saturday 13 September 2025 00:48:38 +0000 (0:00:00.246) 0:02:09.148 ****
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
orchestrator | Saturday 13 September 2025 00:48:38 +0000 (0:00:00.715) 0:02:09.864 ****
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
orchestrator | Saturday 13 September 2025 00:48:39 +0000 (0:00:00.599) 0:02:10.463 ****
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
orchestrator |
orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
orchestrator |
orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
orchestrator | Saturday 13 September 2025 00:48:42 +0000 (0:00:03.062) 0:02:13.526 ****
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
orchestrator | Saturday 13 September 2025 00:48:42 +0000 (0:00:00.383) 0:02:13.909 ****
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
orchestrator | Saturday 13 September 2025 00:48:43 +0000 (0:00:00.257) 0:02:14.484 ****
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
orchestrator | Saturday 13 September 2025 00:48:43 +0000 (0:00:00.257) 0:02:14.741 ****
orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
orchestrator | Saturday 13 September 2025 00:48:44 +0000 (0:00:00.534) 0:02:15.275 ****
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
orchestrator | Saturday 13 September 2025 00:48:44 +0000 (0:00:00.270) 0:02:15.546 ****
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
orchestrator | Saturday 13 September 2025 00:48:44 +0000 (0:00:00.272) 0:02:15.819 ****
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
orchestrator | Saturday 13 September 2025 00:48:45 +0000 (0:00:00.282) 0:02:16.102 ****
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
orchestrator | Saturday 13 September 2025 00:48:45 +0000 (0:00:00.687) 0:02:16.790 ****
orchestrator | changed: [testbed-node-3]
orchestrator |
changed: [testbed-node-4] 2025-09-13 00:50:11.130524 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:50:11.130534 | orchestrator | 2025-09-13 00:50:11.130543 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-09-13 00:50:11.130553 | orchestrator | Saturday 13 September 2025 00:48:46 +0000 (0:00:01.112) 0:02:17.902 **** 2025-09-13 00:50:11.130563 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:50:11.130572 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:50:11.130582 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:50:11.130591 | orchestrator | 2025-09-13 00:50:11.130601 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-09-13 00:50:11.130611 | orchestrator | Saturday 13 September 2025 00:48:48 +0000 (0:00:01.149) 0:02:19.051 **** 2025-09-13 00:50:11.130620 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:50:11.130630 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:50:11.130640 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:50:11.130649 | orchestrator | 2025-09-13 00:50:11.130659 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-13 00:50:11.130668 | orchestrator | 2025-09-13 00:50:11.130678 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-13 00:50:11.130688 | orchestrator | Saturday 13 September 2025 00:48:59 +0000 (0:00:11.942) 0:02:30.994 **** 2025-09-13 00:50:11.130698 | orchestrator | ok: [testbed-manager] 2025-09-13 00:50:11.130707 | orchestrator | 2025-09-13 00:50:11.130717 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-13 00:50:11.130727 | orchestrator | Saturday 13 September 2025 00:49:00 +0000 (0:00:00.826) 0:02:31.820 **** 2025-09-13 00:50:11.130743 | orchestrator | changed: [testbed-manager] 2025-09-13 00:50:11.130753 | 
orchestrator | 2025-09-13 00:50:11.130762 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-13 00:50:11.130772 | orchestrator | Saturday 13 September 2025 00:49:01 +0000 (0:00:00.584) 0:02:32.404 **** 2025-09-13 00:50:11.130782 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-13 00:50:11.130792 | orchestrator | 2025-09-13 00:50:11.130801 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-13 00:50:11.130811 | orchestrator | Saturday 13 September 2025 00:49:01 +0000 (0:00:00.549) 0:02:32.954 **** 2025-09-13 00:50:11.130821 | orchestrator | changed: [testbed-manager] 2025-09-13 00:50:11.130830 | orchestrator | 2025-09-13 00:50:11.130840 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-13 00:50:11.130849 | orchestrator | Saturday 13 September 2025 00:49:02 +0000 (0:00:00.843) 0:02:33.798 **** 2025-09-13 00:50:11.130859 | orchestrator | changed: [testbed-manager] 2025-09-13 00:50:11.130869 | orchestrator | 2025-09-13 00:50:11.130878 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-13 00:50:11.130888 | orchestrator | Saturday 13 September 2025 00:49:03 +0000 (0:00:00.645) 0:02:34.444 **** 2025-09-13 00:50:11.130903 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-13 00:50:11.130913 | orchestrator | 2025-09-13 00:50:11.130923 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-13 00:50:11.130933 | orchestrator | Saturday 13 September 2025 00:49:04 +0000 (0:00:01.552) 0:02:35.996 **** 2025-09-13 00:50:11.130942 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-13 00:50:11.130952 | orchestrator | 2025-09-13 00:50:11.130962 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-13 
00:50:11.130971 | orchestrator | Saturday 13 September 2025 00:49:05 +0000 (0:00:00.811) 0:02:36.808 **** 2025-09-13 00:50:11.130981 | orchestrator | changed: [testbed-manager] 2025-09-13 00:50:11.130991 | orchestrator | 2025-09-13 00:50:11.131000 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-13 00:50:11.131010 | orchestrator | Saturday 13 September 2025 00:49:06 +0000 (0:00:00.451) 0:02:37.259 **** 2025-09-13 00:50:11.131019 | orchestrator | changed: [testbed-manager] 2025-09-13 00:50:11.131029 | orchestrator | 2025-09-13 00:50:11.131043 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-09-13 00:50:11.131053 | orchestrator | 2025-09-13 00:50:11.131063 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-09-13 00:50:11.131072 | orchestrator | Saturday 13 September 2025 00:49:06 +0000 (0:00:00.660) 0:02:37.919 **** 2025-09-13 00:50:11.131082 | orchestrator | ok: [testbed-manager] 2025-09-13 00:50:11.131092 | orchestrator | 2025-09-13 00:50:11.131101 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-09-13 00:50:11.131111 | orchestrator | Saturday 13 September 2025 00:49:07 +0000 (0:00:00.162) 0:02:38.082 **** 2025-09-13 00:50:11.131163 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-09-13 00:50:11.131173 | orchestrator | 2025-09-13 00:50:11.131182 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-09-13 00:50:11.131192 | orchestrator | Saturday 13 September 2025 00:49:07 +0000 (0:00:00.230) 0:02:38.312 **** 2025-09-13 00:50:11.131202 | orchestrator | ok: [testbed-manager] 2025-09-13 00:50:11.131211 | orchestrator | 2025-09-13 00:50:11.131221 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 
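The kubeconfig play above copies the kubeconfig from the first control-plane node (testbed-node-0) and then rewrites its server address twice so it no longer points at a node-local endpoint. A minimal sketch of that rewrite step, assuming k3s's default `server: https://127.0.0.1:6443` entry; the target address `192.168.16.254` is a made-up example, not taken from this job:

```shell
# Sketch of the "Change server address in the kubeconfig" step: k3s writes its
# kubeconfig pointing at the local loopback, so it must be rewritten to an
# address reachable from the manager (the VIP here is a hypothetical value).
kubeconfig='server: https://127.0.0.1:6443'
printf '%s\n' "$kubeconfig" | sed 's|https://127.0.0.1:6443|https://192.168.16.254:6443|'
# -> server: https://192.168.16.254:6443
```

The same substitution is then repeated for the copy made available inside the manager service.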
2025-09-13 00:50:11.131231 | orchestrator | Saturday 13 September 2025 00:49:08 +0000 (0:00:00.865) 0:02:39.177 ****
2025-09-13 00:50:11.131240 | orchestrator | ok: [testbed-manager]
2025-09-13 00:50:11.131250 | orchestrator |
2025-09-13 00:50:11.131259 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-09-13 00:50:11.131269 | orchestrator | Saturday 13 September 2025 00:49:09 +0000 (0:00:01.750) 0:02:40.927 ****
2025-09-13 00:50:11.131278 | orchestrator | changed: [testbed-manager]
2025-09-13 00:50:11.131288 | orchestrator |
2025-09-13 00:50:11.131297 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-09-13 00:50:11.131307 | orchestrator | Saturday 13 September 2025 00:49:10 +0000 (0:00:00.847) 0:02:41.774 ****
2025-09-13 00:50:11.131317 | orchestrator | ok: [testbed-manager]
2025-09-13 00:50:11.131326 | orchestrator |
2025-09-13 00:50:11.131336 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-09-13 00:50:11.131346 | orchestrator | Saturday 13 September 2025 00:49:11 +0000 (0:00:00.457) 0:02:42.232 ****
2025-09-13 00:50:11.131355 | orchestrator | changed: [testbed-manager]
2025-09-13 00:50:11.131365 | orchestrator |
2025-09-13 00:50:11.131374 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-09-13 00:50:11.131384 | orchestrator | Saturday 13 September 2025 00:49:19 +0000 (0:00:07.809) 0:02:50.041 ****
2025-09-13 00:50:11.131393 | orchestrator | changed: [testbed-manager]
2025-09-13 00:50:11.131403 | orchestrator |
2025-09-13 00:50:11.131413 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-09-13 00:50:11.131422 | orchestrator | Saturday 13 September 2025 00:49:35 +0000 (0:00:16.680) 0:03:06.722 ****
2025-09-13 00:50:11.131432 | orchestrator | ok: [testbed-manager]
2025-09-13 00:50:11.131447 | orchestrator |
2025-09-13 00:50:11.131457 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-09-13 00:50:11.131467 | orchestrator |
2025-09-13 00:50:11.131476 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-09-13 00:50:11.131486 | orchestrator | Saturday 13 September 2025 00:49:36 +0000 (0:00:00.589) 0:03:07.311 ****
2025-09-13 00:50:11.131495 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:50:11.131505 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:50:11.131515 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:50:11.131524 | orchestrator |
2025-09-13 00:50:11.131534 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-09-13 00:50:11.131544 | orchestrator | Saturday 13 September 2025 00:49:36 +0000 (0:00:00.473) 0:03:07.785 ****
2025-09-13 00:50:11.131553 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:50:11.131563 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:50:11.131573 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:50:11.131582 | orchestrator |
2025-09-13 00:50:11.131597 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-09-13 00:50:11.131607 | orchestrator | Saturday 13 September 2025 00:49:37 +0000 (0:00:00.453) 0:03:08.238 ****
2025-09-13 00:50:11.131617 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:50:11.131626 | orchestrator |
2025-09-13 00:50:11.131636 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-09-13 00:50:11.131645 | orchestrator | Saturday 13 September 2025 00:49:38 +0000 (0:00:00.920) 0:03:09.159 ****
2025-09-13 00:50:11.131655 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:50:11.131665 | orchestrator |
2025-09-13 00:50:11.131674 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
2025-09-13 00:50:11.131684 | orchestrator | Saturday 13 September 2025 00:49:38 +0000 (0:00:00.259) 0:03:09.418 ****
2025-09-13 00:50:11.131693 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:50:11.131703 | orchestrator |
2025-09-13 00:50:11.131713 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
2025-09-13 00:50:11.131722 | orchestrator | Saturday 13 September 2025 00:49:38 +0000 (0:00:00.265) 0:03:09.684 ****
2025-09-13 00:50:11.131732 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:50:11.131741 | orchestrator |
2025-09-13 00:50:11.131751 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
2025-09-13 00:50:11.131760 | orchestrator | Saturday 13 September 2025 00:49:38 +0000 (0:00:00.195) 0:03:09.880 ****
2025-09-13 00:50:11.131770 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:50:11.131780 | orchestrator |
2025-09-13 00:50:11.131789 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
2025-09-13 00:50:11.131799 | orchestrator | Saturday 13 September 2025 00:49:39 +0000 (0:00:00.227) 0:03:10.107 ****
2025-09-13 00:50:11.131808 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:50:11.131818 | orchestrator |
2025-09-13 00:50:11.131828 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] **********************
2025-09-13 00:50:11.131837 | orchestrator | Saturday 13 September 2025 00:49:39 +0000 (0:00:00.193) 0:03:10.301 ****
2025-09-13 00:50:11.131847 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:50:11.131856 | orchestrator |
2025-09-13 00:50:11.131866 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
2025-09-13 00:50:11.131879 | orchestrator | Saturday 13 September 2025 00:49:39 +0000 (0:00:00.200) 0:03:10.501 ****
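The Cilium CLI tasks above (all skipped in this run) implement an install-or-update decision by comparing the installed CLI version against the latest stable one. A hedged sketch of such a comparison using `sort -V`; the version strings below are invented for illustration, while the real role would read them from `cilium version` output and the upstream stable-version file:

```shell
# Decide whether a Cilium CLI update is needed (versions are hypothetical).
installed="v0.16.4"
latest="v0.16.11"
# sort -V orders by version components, so v0.16.4 sorts before v0.16.11.
lowest=$(printf '%s\n%s\n' "$installed" "$latest" | sort -V | head -n1)
if [ "$installed" != "$latest" ] && [ "$lowest" = "$installed" ]; then
  echo "update-needed"   # installed version sorts below the latest stable
else
  echo "up-to-date"
fi
# -> update-needed
```

Plain string comparison would get this wrong (`"v0.16.4" > "v0.16.11"` lexically), which is why a version-aware sort is the usual tool for this check.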
2025-09-13 00:50:11.131889 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.131899 | orchestrator | 2025-09-13 00:50:11.131909 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-09-13 00:50:11.131918 | orchestrator | Saturday 13 September 2025 00:49:39 +0000 (0:00:00.327) 0:03:10.828 **** 2025-09-13 00:50:11.131928 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.131937 | orchestrator | 2025-09-13 00:50:11.131947 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-09-13 00:50:11.131962 | orchestrator | Saturday 13 September 2025 00:49:40 +0000 (0:00:00.254) 0:03:11.083 **** 2025-09-13 00:50:11.131972 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.131981 | orchestrator | 2025-09-13 00:50:11.131991 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-09-13 00:50:11.132001 | orchestrator | Saturday 13 September 2025 00:49:40 +0000 (0:00:00.226) 0:03:11.309 **** 2025-09-13 00:50:11.132010 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-09-13 00:50:11.132020 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-09-13 00:50:11.132029 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132039 | orchestrator | 2025-09-13 00:50:11.132048 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-09-13 00:50:11.132058 | orchestrator | Saturday 13 September 2025 00:49:41 +0000 (0:00:00.908) 0:03:12.218 **** 2025-09-13 00:50:11.132067 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132077 | orchestrator | 2025-09-13 00:50:11.132086 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-09-13 00:50:11.132096 | orchestrator | Saturday 13 September 2025 00:49:41 +0000 (0:00:00.227) 0:03:12.445 **** 2025-09-13 
00:50:11.132105 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132129 | orchestrator | 2025-09-13 00:50:11.132139 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-09-13 00:50:11.132149 | orchestrator | Saturday 13 September 2025 00:49:41 +0000 (0:00:00.209) 0:03:12.655 **** 2025-09-13 00:50:11.132158 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132168 | orchestrator | 2025-09-13 00:50:11.132178 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-09-13 00:50:11.132187 | orchestrator | Saturday 13 September 2025 00:49:41 +0000 (0:00:00.212) 0:03:12.868 **** 2025-09-13 00:50:11.132197 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132206 | orchestrator | 2025-09-13 00:50:11.132216 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-09-13 00:50:11.132226 | orchestrator | Saturday 13 September 2025 00:49:42 +0000 (0:00:00.233) 0:03:13.101 **** 2025-09-13 00:50:11.132235 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132245 | orchestrator | 2025-09-13 00:50:11.132255 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-09-13 00:50:11.132264 | orchestrator | Saturday 13 September 2025 00:49:42 +0000 (0:00:00.202) 0:03:13.303 **** 2025-09-13 00:50:11.132274 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132283 | orchestrator | 2025-09-13 00:50:11.132293 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-09-13 00:50:11.132303 | orchestrator | Saturday 13 September 2025 00:49:42 +0000 (0:00:00.194) 0:03:13.497 **** 2025-09-13 00:50:11.132312 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132322 | orchestrator | 2025-09-13 00:50:11.132331 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] 
************************ 2025-09-13 00:50:11.132341 | orchestrator | Saturday 13 September 2025 00:49:42 +0000 (0:00:00.197) 0:03:13.695 **** 2025-09-13 00:50:11.132350 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132360 | orchestrator | 2025-09-13 00:50:11.132370 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-09-13 00:50:11.132384 | orchestrator | Saturday 13 September 2025 00:49:42 +0000 (0:00:00.264) 0:03:13.959 **** 2025-09-13 00:50:11.132394 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132404 | orchestrator | 2025-09-13 00:50:11.132414 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-09-13 00:50:11.132423 | orchestrator | Saturday 13 September 2025 00:49:43 +0000 (0:00:00.217) 0:03:14.177 **** 2025-09-13 00:50:11.132433 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132442 | orchestrator | 2025-09-13 00:50:11.132452 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-09-13 00:50:11.132461 | orchestrator | Saturday 13 September 2025 00:49:43 +0000 (0:00:00.222) 0:03:14.399 **** 2025-09-13 00:50:11.132483 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132492 | orchestrator | 2025-09-13 00:50:11.132502 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-09-13 00:50:11.132512 | orchestrator | Saturday 13 September 2025 00:49:43 +0000 (0:00:00.184) 0:03:14.583 **** 2025-09-13 00:50:11.132521 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-09-13 00:50:11.132531 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-09-13 00:50:11.132541 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-09-13 00:50:11.132550 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-09-13 
00:50:11.132560 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132569 | orchestrator | 2025-09-13 00:50:11.132579 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-09-13 00:50:11.132588 | orchestrator | Saturday 13 September 2025 00:49:44 +0000 (0:00:00.954) 0:03:15.538 **** 2025-09-13 00:50:11.132598 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132608 | orchestrator | 2025-09-13 00:50:11.132617 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-09-13 00:50:11.132627 | orchestrator | Saturday 13 September 2025 00:49:44 +0000 (0:00:00.210) 0:03:15.748 **** 2025-09-13 00:50:11.132636 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132646 | orchestrator | 2025-09-13 00:50:11.132655 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-09-13 00:50:11.132665 | orchestrator | Saturday 13 September 2025 00:49:44 +0000 (0:00:00.239) 0:03:15.988 **** 2025-09-13 00:50:11.132679 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132689 | orchestrator | 2025-09-13 00:50:11.132698 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-09-13 00:50:11.132708 | orchestrator | Saturday 13 September 2025 00:49:45 +0000 (0:00:00.203) 0:03:16.192 **** 2025-09-13 00:50:11.132718 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132727 | orchestrator | 2025-09-13 00:50:11.132737 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-09-13 00:50:11.132747 | orchestrator | Saturday 13 September 2025 00:49:45 +0000 (0:00:00.229) 0:03:16.422 **** 2025-09-13 00:50:11.132756 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-09-13 00:50:11.132766 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get 
CiliumLoadBalancerIPPool.cilium.io)  2025-09-13 00:50:11.132776 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132785 | orchestrator | 2025-09-13 00:50:11.132795 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-09-13 00:50:11.132804 | orchestrator | Saturday 13 September 2025 00:49:45 +0000 (0:00:00.300) 0:03:16.722 **** 2025-09-13 00:50:11.132814 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:11.132824 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:50:11.132833 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:50:11.132843 | orchestrator | 2025-09-13 00:50:11.132852 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-09-13 00:50:11.132862 | orchestrator | Saturday 13 September 2025 00:49:46 +0000 (0:00:00.324) 0:03:17.046 **** 2025-09-13 00:50:11.132872 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:50:11.132881 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:50:11.132891 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:50:11.132901 | orchestrator | 2025-09-13 00:50:11.132910 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-09-13 00:50:11.132920 | orchestrator | 2025-09-13 00:50:11.132930 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-09-13 00:50:11.132939 | orchestrator | Saturday 13 September 2025 00:49:47 +0000 (0:00:01.164) 0:03:18.210 **** 2025-09-13 00:50:11.132949 | orchestrator | ok: [testbed-manager] 2025-09-13 00:50:11.132958 | orchestrator | 2025-09-13 00:50:11.132968 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-09-13 00:50:11.132984 | orchestrator | Saturday 13 September 2025 00:49:47 +0000 (0:00:00.150) 0:03:18.361 **** 2025-09-13 00:50:11.132994 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml 
for testbed-manager 2025-09-13 00:50:11.133003 | orchestrator | 2025-09-13 00:50:11.133013 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-09-13 00:50:11.133022 | orchestrator | Saturday 13 September 2025 00:49:47 +0000 (0:00:00.295) 0:03:18.656 **** 2025-09-13 00:50:11.133032 | orchestrator | changed: [testbed-manager] 2025-09-13 00:50:11.133041 | orchestrator | 2025-09-13 00:50:11.133051 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-09-13 00:50:11.133060 | orchestrator | 2025-09-13 00:50:11.133070 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-09-13 00:50:11.133080 | orchestrator | Saturday 13 September 2025 00:49:53 +0000 (0:00:05.921) 0:03:24.578 **** 2025-09-13 00:50:11.133089 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:50:11.133099 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:50:11.133108 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:50:11.133154 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:50:11.133164 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:50:11.133174 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:50:11.133183 | orchestrator | 2025-09-13 00:50:11.133193 | orchestrator | TASK [Manage labels] *********************************************************** 2025-09-13 00:50:11.133203 | orchestrator | Saturday 13 September 2025 00:49:54 +0000 (0:00:00.859) 0:03:25.437 **** 2025-09-13 00:50:11.133218 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-13 00:50:11.133228 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-13 00:50:11.133237 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-13 00:50:11.133247 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=node-role.osism.tech/control-plane=true) 2025-09-13 00:50:11.133257 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-13 00:50:11.133266 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-13 00:50:11.133276 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-13 00:50:11.133286 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-13 00:50:11.133295 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-13 00:50:11.133305 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-13 00:50:11.133314 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-13 00:50:11.133324 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-13 00:50:11.133333 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-13 00:50:11.133343 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-13 00:50:11.133353 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-13 00:50:11.133362 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-13 00:50:11.133372 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-13 00:50:11.133382 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-13 00:50:11.133391 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-13 00:50:11.133401 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=node-role.osism.tech/rook-mds=true)
2025-09-13 00:50:11.133411 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-13 00:50:11.133426 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-13 00:50:11.133436 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-13 00:50:11.133445 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-13 00:50:11.133461 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-13 00:50:11.133471 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-13 00:50:11.133481 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-13 00:50:11.133490 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-13 00:50:11.133498 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-13 00:50:11.133506 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-13 00:50:11.133514 | orchestrator |
2025-09-13 00:50:11.133522 | orchestrator | TASK [Manage annotations] ******************************************************
2025-09-13 00:50:11.133529 | orchestrator | Saturday 13 September 2025 00:50:08 +0000 (0:00:14.139) 0:03:39.577 ****
2025-09-13 00:50:11.133537 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:50:11.133545 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:50:11.133553 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:50:11.133560 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:50:11.133568 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:50:11.133576 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:50:11.133584 | orchestrator |
2025-09-13 00:50:11.133592 | orchestrator | TASK [Manage taints] ***********************************************************
2025-09-13 00:50:11.133600 | orchestrator | Saturday 13 September 2025 00:50:09 +0000 (0:00:00.581) 0:03:40.158 ****
2025-09-13 00:50:11.133607 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:50:11.133615 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:50:11.133623 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:50:11.133631 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:50:11.133638 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:50:11.133646 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:50:11.133654 | orchestrator |
2025-09-13 00:50:11.133662 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:50:11.133670 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 00:50:11.133679 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2025-09-13 00:50:11.133687 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-13 00:50:11.133699 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-13 00:50:11.133707 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-13 00:50:11.133715 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-13 00:50:11.133723 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-13 00:50:11.133731 | orchestrator |
2025-09-13 00:50:11.133739 | orchestrator |
2025-09-13 00:50:11.133746 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:50:11.133754 | orchestrator | Saturday 13 September 2025 00:50:09 +0000 (0:00:00.388) 0:03:40.547 ****
2025-09-13 00:50:11.133767 | orchestrator | ===============================================================================
2025-09-13 00:50:11.133775 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 44.69s
2025-09-13 00:50:11.133783 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 22.93s
2025-09-13 00:50:11.133791 | orchestrator | kubectl : Install required packages ------------------------------------ 16.68s
2025-09-13 00:50:11.133799 | orchestrator | Manage labels ---------------------------------------------------------- 14.14s
2025-09-13 00:50:11.133806 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.94s
2025-09-13 00:50:11.133814 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.81s
2025-09-13 00:50:11.133822 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.67s
2025-09-13 00:50:11.133830 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.92s
2025-09-13 00:50:11.133841 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 3.47s
2025-09-13 00:50:11.133849 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.08s
2025-09-13 00:50:11.133857 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.06s
2025-09-13 00:50:11.133865 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 3.04s
2025-09-13 00:50:11.133872 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.61s
2025-09-13 00:50:11.133880 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.19s
2025-09-13 00:50:11.133888 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.05s
2025-09-13 00:50:11.133896 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 2.01s
2025-09-13 00:50:11.133904 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.92s
2025-09-13 00:50:11.133911 | orchestrator | k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry --- 1.86s
2025-09-13 00:50:11.133919 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.85s
2025-09-13 00:50:11.133927 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.82s
2025-09-13 00:50:11.133935 | orchestrator | 2025-09-13 00:50:11 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED
2025-09-13 00:50:11.133943 | orchestrator | 2025-09-13 00:50:11 | INFO  | Task b1367a98-9847-4333-8a2d-e2ca637a9d21 is in state STARTED
2025-09-13 00:50:11.133951 | orchestrator | 2025-09-13 00:50:11 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:50:11.133959 | orchestrator | 2025-09-13 00:50:11 | INFO  | Task 86e95312-b5d9-4fb5-b397-4f5a031f1feb is in state STARTED
2025-09-13 00:50:11.133966 | orchestrator | 2025-09-13 00:50:11 | INFO  | Task 7f341164-a2c5-4793-9e4a-6679f3a8ec9e is in state SUCCESS
2025-09-13 00:50:11.133974 | orchestrator | 2025-09-13 00:50:11 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:50:11.133982 | orchestrator | 2025-09-13 00:50:11 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:50:11.133990 | orchestrator | 2025-09-13 00:50:11 | INFO  | Wait 1 second(s) until the next check
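The repeating INFO lines above are a poll-until-done loop: the manager checks each task's state every round and sleeps between rounds until everything reaches SUCCESS. A minimal sketch of that pattern in Python (illustrative only; `get_state` is a hypothetical stand-in for the real osism manager API):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=600.0):
    """Poll task states until every task reaches SUCCESS or a deadline passes.

    get_state: callable mapping a task id to a state string such as
    "STARTED" or "SUCCESS" (hypothetical stand-in for the manager API).
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        # One polling round: report every still-pending task's state.
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

Tasks that finish drop out of later rounds, which matches the log above: once task 7f341164… reports SUCCESS it no longer appears in subsequent checks.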
2025-09-13 00:50:14.203425 | orchestrator | 2025-09-13 00:50:14 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED
2025-09-13 00:50:14.205629 | orchestrator | 2025-09-13 00:50:14 | INFO  | Task b1367a98-9847-4333-8a2d-e2ca637a9d21 is in state STARTED
2025-09-13 00:50:14.205973 | orchestrator | 2025-09-13 00:50:14 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:50:14.206941 | orchestrator | 2025-09-13 00:50:14 | INFO  | Task 86e95312-b5d9-4fb5-b397-4f5a031f1feb is in state STARTED
2025-09-13 00:50:14.208826 | orchestrator | 2025-09-13 00:50:14 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:50:14.209499 | orchestrator | 2025-09-13 00:50:14 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:50:14.209615 | orchestrator | 2025-09-13 00:50:14 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:50:17.290944 | orchestrator | 2025-09-13 00:50:17 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED
2025-09-13 00:50:17.291035 | orchestrator | 2025-09-13 00:50:17 | INFO  | Task b1367a98-9847-4333-8a2d-e2ca637a9d21 is in state STARTED
2025-09-13 00:50:17.291050 | orchestrator | 2025-09-13 00:50:17 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:50:17.291063 | orchestrator | 2025-09-13 00:50:17 | INFO  | Task 86e95312-b5d9-4fb5-b397-4f5a031f1feb is in state SUCCESS
2025-09-13 00:50:17.291074 | orchestrator | 2025-09-13 00:50:17 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:50:17.291087 | orchestrator | 2025-09-13 00:50:17 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:50:17.291099 | orchestrator | 2025-09-13 00:50:17 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:50:20.314828 | orchestrator | 2025-09-13 00:50:20 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED
2025-09-13 00:50:20.314924 | orchestrator | 2025-09-13 00:50:20 | INFO  | Task b1367a98-9847-4333-8a2d-e2ca637a9d21 is in state STARTED
2025-09-13 00:50:20.315950 | orchestrator | 2025-09-13 00:50:20 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:50:20.318183 | orchestrator | 2025-09-13 00:50:20 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:50:20.319092 | orchestrator | 2025-09-13 00:50:20 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:50:20.319117 | orchestrator | 2025-09-13 00:50:20 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:50:23.362320 | orchestrator | 2025-09-13 00:50:23 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED
2025-09-13 00:50:23.362421 | orchestrator | 2025-09-13 00:50:23 | INFO  | Task b1367a98-9847-4333-8a2d-e2ca637a9d21 is in state SUCCESS
2025-09-13 00:50:23.363189 | orchestrator | 2025-09-13 00:50:23 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:50:23.364045 | orchestrator | 2025-09-13 00:50:23 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:50:23.365205 | orchestrator | 2025-09-13 00:50:23 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:50:23.365226 | orchestrator | 2025-09-13 00:50:23 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:50:26.392650 | orchestrator | 2025-09-13 00:50:26 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED
2025-09-13 00:50:26.395555 | orchestrator | 2025-09-13 00:50:26 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:50:26.395595 | orchestrator | 2025-09-13 00:50:26 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:50:26.395608 | orchestrator | 2025-09-13 00:50:26 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:50:26.395620 | orchestrator | 2025-09-13 00:50:26 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:50:29.447374 | orchestrator | 2025-09-13 00:50:29 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED
2025-09-13 00:50:29.448890 | orchestrator | 2025-09-13 00:50:29 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:50:29.450554 | orchestrator | 2025-09-13 00:50:29 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:50:29.452446 | orchestrator | 2025-09-13 00:50:29 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:50:29.452536 | orchestrator | 2025-09-13 00:50:29 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:50:32.488056 | orchestrator | 2025-09-13 00:50:32 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED
2025-09-13 00:50:32.488192 | orchestrator | 2025-09-13 00:50:32 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:50:32.491049 | orchestrator | 2025-09-13 00:50:32 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:50:32.491963 | orchestrator | 2025-09-13 00:50:32 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:50:32.491990 | orchestrator | 2025-09-13 00:50:32 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:50:35.523067 | orchestrator | 2025-09-13 00:50:35 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state STARTED
2025-09-13 00:50:35.523845 | orchestrator | 2025-09-13 00:50:35 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:50:35.525443 | orchestrator | 2025-09-13 00:50:35 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:50:35.526428 | orchestrator | 2025-09-13 00:50:35 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:50:35.528549 | orchestrator | 2025-09-13 00:50:35 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:50:38.554268 | orchestrator |
2025-09-13 00:50:38.554379 | orchestrator |
2025-09-13 00:50:38.554395 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-09-13 00:50:38.554407 | orchestrator |
2025-09-13 00:50:38.554419 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-13 00:50:38.554430 | orchestrator | Saturday 13 September 2025 00:50:14 +0000 (0:00:00.190) 0:00:00.190 ****
2025-09-13 00:50:38.554441 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-13 00:50:38.554452 | orchestrator |
2025-09-13 00:50:38.554463 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-13 00:50:38.554474 | orchestrator | Saturday 13 September 2025 00:50:14 +0000 (0:00:00.825) 0:00:01.016 ****
2025-09-13 00:50:38.554485 | orchestrator | changed: [testbed-manager]
2025-09-13 00:50:38.554496 | orchestrator |
2025-09-13 00:50:38.554507 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-09-13 00:50:38.554517 | orchestrator | Saturday 13 September 2025 00:50:16 +0000 (0:00:01.363) 0:00:02.382 ****
2025-09-13 00:50:38.554528 | orchestrator | changed: [testbed-manager]
2025-09-13 00:50:38.554539 | orchestrator |
2025-09-13 00:50:38.554549 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:50:38.554587 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 00:50:38.554600 | orchestrator |
2025-09-13 00:50:38.554611 | orchestrator |
2025-09-13 00:50:38.554622 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:50:38.554633 | orchestrator | Saturday 13 September 2025 00:50:16 +0000 (0:00:00.459) 0:00:02.842 ****
2025-09-13 00:50:38.554644 | orchestrator | ===============================================================================
2025-09-13 00:50:38.554654 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.37s
2025-09-13 00:50:38.554665 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.83s
2025-09-13 00:50:38.554701 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.46s
2025-09-13 00:50:38.554713 | orchestrator |
2025-09-13 00:50:38.554723 | orchestrator |
2025-09-13 00:50:38.554734 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-13 00:50:38.554745 | orchestrator |
2025-09-13 00:50:38.554756 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-13 00:50:38.554767 | orchestrator | Saturday 13 September 2025 00:50:13 +0000 (0:00:00.165) 0:00:00.165 ****
2025-09-13 00:50:38.554778 | orchestrator | ok: [testbed-manager]
2025-09-13 00:50:38.554789 | orchestrator |
2025-09-13 00:50:38.554800 | orchestrator | TASK [Create .kube directory] **************************************************
2025-09-13 00:50:38.554810 | orchestrator | Saturday 13 September 2025 00:50:14 +0000 (0:00:00.454) 0:00:00.728 ****
2025-09-13 00:50:38.554821 | orchestrator | ok: [testbed-manager]
2025-09-13 00:50:38.554832 | orchestrator |
2025-09-13 00:50:38.554843 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-13 00:50:38.554854 | orchestrator | Saturday 13 September 2025 00:50:14 +0000 (0:00:00.869) 0:00:01.183 ****
2025-09-13 00:50:38.554864 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-13 00:50:38.554875 | orchestrator |
2025-09-13 00:50:38.554886 | orchestrator | TASK [Write kubeconfig file] ***************************************************
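The "Change server address in the kubeconfig" tasks above exist because k3s writes `server: https://127.0.0.1:6443` into the kubeconfig it generates on the node, which is useless once the file is copied elsewhere. A minimal sketch of that rewrite (an assumption about how the playbook does it, not its actual implementation; the target address is taken from the `testbed-node-0(192.168.16.10)` delegation shown above):

```python
import re

def rewrite_kubeconfig_server(kubeconfig_text: str, new_server: str) -> str:
    """Replace the loopback API server address k3s writes into its
    kubeconfig with an address reachable from other hosts."""
    return re.sub(r"server: https://127\.0\.0\.1:6443",
                  f"server: {new_server}", kubeconfig_text)

# Example: point the copied config at the first control-plane node.
cfg = "clusters:\n- cluster:\n    server: https://127.0.0.1:6443\n"
print(rewrite_kubeconfig_server(cfg, "https://192.168.16.10:6443"))
```

The same substitution is applied twice in the log: once for the operator's `~/.kube/config` and once for the copy made available inside the manager service.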
2025-09-13 00:50:38.554897 | orchestrator | Saturday 13 September 2025 00:50:15 +0000 (0:00:00.869) 0:00:02.052 ****
2025-09-13 00:50:38.554907 | orchestrator | changed: [testbed-manager]
2025-09-13 00:50:38.554918 | orchestrator |
2025-09-13 00:50:38.554930 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-13 00:50:38.554940 | orchestrator | Saturday 13 September 2025 00:50:17 +0000 (0:00:01.287) 0:00:03.340 ****
2025-09-13 00:50:38.554951 | orchestrator | changed: [testbed-manager]
2025-09-13 00:50:38.554962 | orchestrator |
2025-09-13 00:50:38.554973 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-13 00:50:38.554984 | orchestrator | Saturday 13 September 2025 00:50:18 +0000 (0:00:00.930) 0:00:04.271 ****
2025-09-13 00:50:38.554995 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-13 00:50:38.555005 | orchestrator |
2025-09-13 00:50:38.555016 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-13 00:50:38.555027 | orchestrator | Saturday 13 September 2025 00:50:19 +0000 (0:00:01.768) 0:00:06.040 ****
2025-09-13 00:50:38.555038 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-13 00:50:38.555049 | orchestrator |
2025-09-13 00:50:38.555059 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-13 00:50:38.555070 | orchestrator | Saturday 13 September 2025 00:50:20 +0000 (0:00:00.833) 0:00:06.873 ****
2025-09-13 00:50:38.555081 | orchestrator | ok: [testbed-manager]
2025-09-13 00:50:38.555091 | orchestrator |
2025-09-13 00:50:38.555102 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-13 00:50:38.555113 | orchestrator | Saturday 13 September 2025 00:50:21 +0000 (0:00:00.446) 0:00:07.320 ****
2025-09-13 00:50:38.555124 | orchestrator | ok: [testbed-manager]
2025-09-13 00:50:38.555155 | orchestrator |
2025-09-13 00:50:38.555166 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:50:38.555177 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 00:50:38.555188 | orchestrator |
2025-09-13 00:50:38.555199 | orchestrator |
2025-09-13 00:50:38.555210 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:50:38.555221 | orchestrator | Saturday 13 September 2025 00:50:21 +0000 (0:00:00.375) 0:00:07.695 ****
2025-09-13 00:50:38.555232 | orchestrator | ===============================================================================
2025-09-13 00:50:38.555242 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.77s
2025-09-13 00:50:38.555253 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.29s
2025-09-13 00:50:38.555272 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.93s
2025-09-13 00:50:38.555299 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.87s
2025-09-13 00:50:38.555310 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.83s
2025-09-13 00:50:38.555321 | orchestrator | Get home directory of operator user ------------------------------------- 0.56s
2025-09-13 00:50:38.555332 | orchestrator | Create .kube directory -------------------------------------------------- 0.45s
2025-09-13 00:50:38.555343 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.45s
2025-09-13 00:50:38.555354 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.38s
2025-09-13 00:50:38.555365 | orchestrator |
2025-09-13 00:50:38.555376 | orchestrator |
2025-09-13 00:50:38.555386 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 00:50:38.555397 | orchestrator |
2025-09-13 00:50:38.555408 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-13 00:50:38.555418 | orchestrator | Saturday 13 September 2025 00:49:18 +0000 (0:00:00.698) 0:00:00.698 ****
2025-09-13 00:50:38.555429 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:50:38.555440 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:50:38.555451 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:50:38.555462 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:50:38.555473 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:50:38.555483 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:50:38.555494 | orchestrator |
2025-09-13 00:50:38.555510 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 00:50:38.555522 | orchestrator | Saturday 13 September 2025 00:49:19 +0000 (0:00:01.188) 0:00:01.887 ****
2025-09-13 00:50:38.555533 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-13 00:50:38.555544 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-13 00:50:38.555555 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-13 00:50:38.555566 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-13 00:50:38.555577 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-13 00:50:38.555588 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-13 00:50:38.555599 | orchestrator |
2025-09-13 00:50:38.555610 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-09-13 00:50:38.555621 | orchestrator |
2025-09-13
00:50:38.555632 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-09-13 00:50:38.555642 | orchestrator | Saturday 13 September 2025 00:49:20 +0000 (0:00:01.536) 0:00:03.423 ****
2025-09-13 00:50:38.555654 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:50:38.555667 | orchestrator |
2025-09-13 00:50:38.555678 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-13 00:50:38.555689 | orchestrator | Saturday 13 September 2025 00:49:24 +0000 (0:00:03.240) 0:00:06.664 ****
2025-09-13 00:50:38.555699 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-13 00:50:38.555711 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-13 00:50:38.555721 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-13 00:50:38.555732 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-13 00:50:38.555743 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-13 00:50:38.555754 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-13 00:50:38.555765 | orchestrator |
2025-09-13 00:50:38.555775 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-13 00:50:38.555793 | orchestrator | Saturday 13 September 2025 00:49:26 +0000 (0:00:01.969) 0:00:08.634 ****
2025-09-13 00:50:38.555804 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-13 00:50:38.555815 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-13 00:50:38.555826 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-13 00:50:38.555837 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-13 00:50:38.555847 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-13 00:50:38.555858 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-13 00:50:38.555869 | orchestrator |
2025-09-13 00:50:38.555880 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-13 00:50:38.555891 | orchestrator | Saturday 13 September 2025 00:49:28 +0000 (0:00:02.445) 0:00:11.079 ****
2025-09-13 00:50:38.555902 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-09-13 00:50:38.555912 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:50:38.555923 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-09-13 00:50:38.555934 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:50:38.555944 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-09-13 00:50:38.555955 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:50:38.555966 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-09-13 00:50:38.555976 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:50:38.555987 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-09-13 00:50:38.555998 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:50:38.556008 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-09-13 00:50:38.556019 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:50:38.556030 | orchestrator |
2025-09-13 00:50:38.556040 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-09-13 00:50:38.556051 | orchestrator | Saturday 13 September 2025 00:49:30 +0000 (0:00:00.876) 0:00:13.235 ****
2025-09-13 00:50:38.556062 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:50:38.556073 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:50:38.556084 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:50:38.556103 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:50:38.556114 | orchestrator
| skipping: [testbed-node-4]
2025-09-13 00:50:38.556125 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:50:38.556152 | orchestrator |
2025-09-13 00:50:38.556164 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-09-13 00:50:38.556175 | orchestrator | Saturday 13 September 2025 00:49:31 +0000 (0:00:00.876) 0:00:14.111 ****
2025-09-13 00:50:38.556195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-13 00:50:38.556215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-13 00:50:38.556234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-13 00:50:38.556246 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-13 00:50:38.556258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-13 00:50:38.556291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-13 00:50:38.556308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-13 00:50:38.556320 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-13 00:50:38.556338 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-13 00:50:38.556350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-13 00:50:38.556361 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-13 00:50:38.556381 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-13 00:50:38.556393 | orchestrator |
2025-09-13 00:50:38.556405 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-09-13 00:50:38.556416 | orchestrator | Saturday 13 September 2025 00:49:34 +0000 (0:00:02.548) 0:00:16.659 ****
2025-09-13 00:50:38.556432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-13 00:50:38.556451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-13 00:50:38.556463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-13 00:50:38.556474 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-13 00:50:38.556492 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-13 00:50:38.556503 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556560 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556577 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556589 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': 
True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556607 | orchestrator | 2025-09-13 00:50:38.556628 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-13 00:50:38.556639 | orchestrator | Saturday 13 September 2025 00:49:37 +0000 (0:00:03.181) 0:00:19.841 **** 2025-09-13 00:50:38.556651 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:50:38.556662 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:50:38.556672 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:50:38.556683 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:50:38.556694 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:50:38.556705 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:50:38.556716 | orchestrator | 2025-09-13 00:50:38.556727 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-13 00:50:38.556738 | orchestrator | Saturday 13 September 2025 00:49:39 +0000 (0:00:02.599) 0:00:22.440 **** 2025-09-13 00:50:38.556749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': 
True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556801 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556836 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 
'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556858 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556877 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556890 | orchestrator | 2025-09-13 00:50:38 | INFO  | Task df3bcc9d-f941-4d95-83a5-1fffbd27f62d is in state SUCCESS 2025-09-13 00:50:38.556913 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556926 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': 
True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-13 00:50:38.556937 | orchestrator | 2025-09-13 00:50:38.556948 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-13 00:50:38.556959 | orchestrator | Saturday 13 September 2025 00:49:43 +0000 (0:00:03.325) 0:00:25.765 **** 2025-09-13 00:50:38.556970 | orchestrator | 2025-09-13 00:50:38.556981 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-13 00:50:38.556992 | orchestrator | Saturday 13 September 2025 00:49:44 +0000 (0:00:01.033) 0:00:26.799 **** 2025-09-13 00:50:38.557003 | orchestrator | 2025-09-13 00:50:38.557014 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-13 00:50:38.557025 | orchestrator | Saturday 13 September 2025 00:49:44 +0000 (0:00:00.135) 0:00:26.934 **** 2025-09-13 00:50:38.557035 | orchestrator | 2025-09-13 00:50:38.557046 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-13 00:50:38.557057 | orchestrator | Saturday 13 September 2025 00:49:44 +0000 (0:00:00.136) 0:00:27.071 **** 2025-09-13 00:50:38.557068 | orchestrator | 2025-09-13 00:50:38.557079 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-13 00:50:38.557090 | orchestrator | Saturday 13 September 2025 00:49:44 +0000 (0:00:00.139) 0:00:27.210 **** 2025-09-13 00:50:38.557100 | orchestrator | 2025-09-13 00:50:38.557111 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2025-09-13 00:50:38.557122 | orchestrator | Saturday 13 September 2025 00:49:44 +0000 (0:00:00.138) 0:00:27.349 **** 2025-09-13 00:50:38.557177 | orchestrator | 2025-09-13 00:50:38.557190 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-09-13 00:50:38.557201 | orchestrator | Saturday 13 September 2025 00:49:45 +0000 (0:00:00.155) 0:00:27.505 **** 2025-09-13 00:50:38.557212 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:50:38.557222 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:50:38.557233 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:50:38.557244 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:50:38.557255 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:50:38.557266 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:50:38.557276 | orchestrator | 2025-09-13 00:50:38.557287 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-09-13 00:50:38.557298 | orchestrator | Saturday 13 September 2025 00:49:56 +0000 (0:00:11.595) 0:00:39.100 **** 2025-09-13 00:50:38.557309 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:50:38.557327 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:50:38.557338 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:50:38.557349 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:50:38.557359 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:50:38.557370 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:50:38.557381 | orchestrator | 2025-09-13 00:50:38.557392 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-13 00:50:38.557403 | orchestrator | Saturday 13 September 2025 00:49:58 +0000 (0:00:02.291) 0:00:41.392 **** 2025-09-13 00:50:38.557413 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:50:38.557425 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:50:38.557436 
| orchestrator | changed: [testbed-node-4] 2025-09-13 00:50:38.557447 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:50:38.557458 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:50:38.557469 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:50:38.557479 | orchestrator | 2025-09-13 00:50:38.557490 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-09-13 00:50:38.557501 | orchestrator | Saturday 13 September 2025 00:50:12 +0000 (0:00:13.122) 0:00:54.515 **** 2025-09-13 00:50:38.557520 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-09-13 00:50:38.557531 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-09-13 00:50:38.557542 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-09-13 00:50:38.557553 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-09-13 00:50:38.557564 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-09-13 00:50:38.557575 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-09-13 00:50:38.557585 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-09-13 00:50:38.557596 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-09-13 00:50:38.557608 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-09-13 00:50:38.557618 | orchestrator | changed: [testbed-node-1] => (item={'col': 
'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-09-13 00:50:38.557629 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-09-13 00:50:38.557640 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-09-13 00:50:38.558422 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-13 00:50:38.558449 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-13 00:50:38.558460 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-13 00:50:38.558471 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-13 00:50:38.558482 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-13 00:50:38.558493 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-13 00:50:38.558504 | orchestrator | 2025-09-13 00:50:38.558515 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-09-13 00:50:38.558527 | orchestrator | Saturday 13 September 2025 00:50:20 +0000 (0:00:08.261) 0:01:02.777 **** 2025-09-13 00:50:38.558547 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-09-13 00:50:38.558559 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:50:38.558570 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-09-13 00:50:38.558580 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:50:38.558591 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-09-13 
00:50:38.558602 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-09-13 00:50:38.558613 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:50:38.558623 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-09-13 00:50:38.558639 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-13 00:50:38.558650 | orchestrator | 2025-09-13 00:50:38.558661 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-13 00:50:38.558672 | orchestrator | Saturday 13 September 2025 00:50:23 +0000 (0:00:02.738) 0:01:05.515 **** 2025-09-13 00:50:38.558683 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-13 00:50:38.558694 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:50:38.558704 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-13 00:50:38.558715 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:50:38.558726 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-13 00:50:38.558736 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:50:38.558747 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-09-13 00:50:38.558758 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-09-13 00:50:38.558769 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-09-13 00:50:38.558780 | orchestrator | 2025-09-13 00:50:38.558790 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-13 00:50:38.558801 | orchestrator | Saturday 13 September 2025 00:50:26 +0000 (0:00:03.728) 0:01:09.244 **** 2025-09-13 00:50:38.558812 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:50:38.558823 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:50:38.558834 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:50:38.558845 | orchestrator | changed: [testbed-node-3] 
2025-09-13 00:50:38.558855 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:50:38.558866 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:50:38.558877 | orchestrator | 2025-09-13 00:50:38.558888 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:50:38.558899 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-13 00:50:38.558923 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-13 00:50:38.558935 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-13 00:50:38.558946 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-13 00:50:38.558957 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-13 00:50:38.558968 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-13 00:50:38.558979 | orchestrator | 2025-09-13 00:50:38.558990 | orchestrator | 2025-09-13 00:50:38.559001 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 00:50:38.559012 | orchestrator | Saturday 13 September 2025 00:50:35 +0000 (0:00:08.818) 0:01:18.062 **** 2025-09-13 00:50:38.559023 | orchestrator | =============================================================================== 2025-09-13 00:50:38.559039 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 21.94s 2025-09-13 00:50:38.559050 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.60s 2025-09-13 00:50:38.559061 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.26s 2025-09-13 00:50:38.559072 | orchestrator | openvswitch : 
Ensuring OVS ports are properly setup --------------------- 3.73s 2025-09-13 00:50:38.559083 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.33s 2025-09-13 00:50:38.559094 | orchestrator | openvswitch : include_tasks --------------------------------------------- 3.24s 2025-09-13 00:50:38.559104 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.18s 2025-09-13 00:50:38.559115 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.74s 2025-09-13 00:50:38.559126 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.60s 2025-09-13 00:50:38.559151 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.55s 2025-09-13 00:50:38.559162 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.45s 2025-09-13 00:50:38.559173 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.29s 2025-09-13 00:50:38.559184 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.16s 2025-09-13 00:50:38.559195 | orchestrator | module-load : Load modules ---------------------------------------------- 1.97s 2025-09-13 00:50:38.559206 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.74s 2025-09-13 00:50:38.559216 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.54s 2025-09-13 00:50:38.559227 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.19s 2025-09-13 00:50:38.559238 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.88s 2025-09-13 00:50:38.559249 | orchestrator | 2025-09-13 00:50:38 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED 2025-09-13 00:50:38.559259 | orchestrator | 2025-09-13 
00:50:38 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state STARTED
2025-09-13 00:50:38.559275 | orchestrator | 2025-09-13 00:50:38 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:50:38.559286 | orchestrator | 2025-09-13 00:50:38 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:50:38.559297 | orchestrator | 2025-09-13 00:50:38 | INFO  | Wait 1 second(s) until the next check
[... the same four-task "is in state STARTED" / "Wait 1 second(s) until the next check" poll cycle repeats roughly every 3 seconds from 00:50:41 through 00:52:09 ...]
2025-09-13 00:52:09.938310 | orchestrator | 2025-09-13 00:52:09 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:52:09.939711 | orchestrator | 2025-09-13 00:52:09.939748 | orchestrator | 2025-09-13
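The long run of identical status lines above is produced by a simple poll-until-done loop: query each task's state, print it, and sleep before the next round. A minimal sketch of that pattern, assuming a hypothetical `get_state` callable rather than the real OSISM/Celery client:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll until no task reports STARTED; return the number of poll cycles.

    `get_state` is a hypothetical stand-in for the real API client; it is
    expected to return a state string such as STARTED, SUCCESS, or FAILURE.
    """
    pending = set(task_ids)
    cycles = 0
    while pending:
        cycles += 1
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)  # task reached a terminal state
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return cycles
```

In the log above four task IDs are polled together, so each cycle prints four status lines followed by the one-second wait message.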
00:52:09.939760 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-09-13 00:52:09.939772 | orchestrator |
2025-09-13 00:52:09.939783 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-09-13 00:52:09.939795 | orchestrator | Saturday 13 September 2025 00:49:48 +0000 (0:00:00.355) 0:00:00.355 ****
2025-09-13 00:52:09.939806 | orchestrator | ok: [localhost] => {
2025-09-13 00:52:09.939818 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-09-13 00:52:09.939830 | orchestrator | }
2025-09-13 00:52:09.939841 | orchestrator |
2025-09-13 00:52:09.939852 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-09-13 00:52:09.939890 | orchestrator | Saturday 13 September 2025 00:49:48 +0000 (0:00:00.041) 0:00:00.396 ****
2025-09-13 00:52:09.939903 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-09-13 00:52:09.939915 | orchestrator | ...ignoring
2025-09-13 00:52:09.939927 | orchestrator |
2025-09-13 00:52:09.939938 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-09-13 00:52:09.939949 | orchestrator | Saturday 13 September 2025 00:49:51 +0000 (0:00:02.796) 0:00:03.193 ****
2025-09-13 00:52:09.939960 | orchestrator | skipping: [localhost]
2025-09-13 00:52:09.939971 | orchestrator |
2025-09-13 00:52:09.939982 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-09-13 00:52:09.939993 | orchestrator | Saturday 13 September 2025 00:49:51 +0000 (0:00:00.064) 0:00:03.257 ****
2025-09-13 00:52:09.940004 | orchestrator | ok: [localhost]
2025-09-13 00:52:09.940015 | orchestrator |
2025-09-13 00:52:09.940025 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 00:52:09.940036 | orchestrator |
2025-09-13 00:52:09.940047 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-13 00:52:09.940058 | orchestrator | Saturday 13 September 2025 00:49:51 +0000 (0:00:00.170) 0:00:03.428 ****
2025-09-13 00:52:09.940069 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:52:09.940079 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:52:09.940090 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:52:09.940140 | orchestrator |
2025-09-13 00:52:09.940153 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 00:52:09.940164 | orchestrator | Saturday 13 September 2025 00:49:52 +0000 (0:00:00.445) 0:00:03.874 ****
2025-09-13 00:52:09.940175 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-09-13 00:52:09.940213 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-09-13 00:52:09.940224 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-09-13 00:52:09.940235 | orchestrator |
2025-09-13 00:52:09.940246 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-09-13 00:52:09.940257 | orchestrator |
2025-09-13 00:52:09.940268 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-13 00:52:09.940278 | orchestrator | Saturday 13 September 2025 00:49:52 +0000 (0:00:00.861) 0:00:04.735 ****
2025-09-13 00:52:09.940289 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:52:09.940300 | orchestrator |
2025-09-13 00:52:09.940314 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-13 00:52:09.940327 | orchestrator | Saturday 13 September 2025 00:49:53 +0000 (0:00:00.649) 0:00:05.385 ****
2025-09-13 00:52:09.940339 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:52:09.940352 | orchestrator |
2025-09-13 00:52:09.940365 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-09-13 00:52:09.940377 | orchestrator | Saturday 13 September 2025 00:49:54 +0000 (0:00:01.065) 0:00:06.451 ****
2025-09-13 00:52:09.940390 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:52:09.940402 | orchestrator |
2025-09-13 00:52:09.940414 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-09-13 00:52:09.940427 | orchestrator | Saturday 13 September 2025 00:49:55 +0000 (0:00:01.059) 0:00:07.510 ****
2025-09-13 00:52:09.940439 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:52:09.940452 | orchestrator |
2025-09-13 00:52:09.940466 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-09-13 00:52:09.940487 | orchestrator | Saturday 13 September 2025 00:49:56 +0000 (0:00:00.601) 0:00:08.112 ****
2025-09-13 00:52:09.940508 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:52:09.940527 | orchestrator |
2025-09-13 00:52:09.940546 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-09-13 00:52:09.940567 | orchestrator | Saturday 13 September 2025 00:49:57 +0000 (0:00:00.907) 0:00:09.020 ****
2025-09-13 00:52:09.940601 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:52:09.940622 | orchestrator |
2025-09-13 00:52:09.940636 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-13 00:52:09.940664 | orchestrator | Saturday 13 September 2025 00:49:58 +0000 (0:00:01.728) 0:00:10.749 ****
2025-09-13 00:52:09.940675 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:52:09.940686 | orchestrator |
2025-09-13 00:52:09.940697 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-13 00:52:09.940708 | orchestrator | Saturday 13 September 2025 00:50:04 +0000 (0:00:05.285) 0:00:16.035 ****
2025-09-13 00:52:09.940719 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:52:09.940730 | orchestrator |
2025-09-13 00:52:09.940741 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-09-13 00:52:09.940752 | orchestrator | Saturday 13 September 2025 00:50:05 +0000 (0:00:00.963) 0:00:16.998 ****
2025-09-13 00:52:09.940762 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:52:09.940773 | orchestrator |
2025-09-13 00:52:09.940784 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-09-13 00:52:09.940795 | orchestrator | Saturday 13 September 2025 00:50:06 +0000 (0:00:00.792) 0:00:17.790 ****
2025-09-13 00:52:09.940806 | orchestrator |
skipping: [testbed-node-0] 2025-09-13 00:52:09.940817 | orchestrator | 2025-09-13 00:52:09.940840 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-13 00:52:09.940852 | orchestrator | Saturday 13 September 2025 00:50:07 +0000 (0:00:00.993) 0:00:18.784 **** 2025-09-13 00:52:09.940869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-13 00:52:09.940886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-13 00:52:09.940900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-13 00:52:09.940919 | orchestrator | 2025-09-13 00:52:09.940936 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-13 00:52:09.940947 | orchestrator | Saturday 13 September 2025 00:50:07 +0000 (0:00:00.935) 0:00:19.720 **** 2025-09-13 00:52:09.940969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-13 00:52:09.940982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-13 00:52:09.940995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-13 00:52:09.941013 | orchestrator | 2025-09-13 00:52:09.941025 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-09-13 00:52:09.941036 | orchestrator | Saturday 13 September 2025 00:50:09 +0000 (0:00:01.683) 0:00:21.404 **** 2025-09-13 00:52:09.941047 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-13 00:52:09.941058 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-13 00:52:09.941069 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-13 00:52:09.941080 | orchestrator | 2025-09-13 00:52:09.941091 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2025-09-13 00:52:09.941102 | orchestrator | Saturday 13 September 2025 00:50:11 +0000 (0:00:01.613) 0:00:23.018 ****
2025-09-13 00:52:09.941113 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-09-13 00:52:09.941128 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-09-13 00:52:09.941139 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-09-13 00:52:09.941150 | orchestrator |
2025-09-13 00:52:09.941161 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-09-13 00:52:09.941172 | orchestrator | Saturday 13 September 2025 00:50:14 +0000 (0:00:03.507) 0:00:26.525 ****
2025-09-13 00:52:09.941204 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-09-13 00:52:09.941216 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-09-13 00:52:09.941227 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-09-13 00:52:09.941237 | orchestrator |
2025-09-13 00:52:09.941248 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-09-13 00:52:09.941259 | orchestrator | Saturday 13 September 2025 00:50:16 +0000 (0:00:01.519) 0:00:28.045 ****
2025-09-13 00:52:09.941276 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-09-13 00:52:09.941288 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-09-13 00:52:09.941299 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-09-13 00:52:09.941310 | orchestrator |
2025-09-13 00:52:09.941321 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-09-13 00:52:09.941332 | orchestrator | Saturday 13 September 2025 00:50:19 +0000 (0:00:02.858) 0:00:30.903 ****
2025-09-13 00:52:09.941342 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-09-13 00:52:09.941353 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-09-13 00:52:09.941364 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-09-13 00:52:09.941375 | orchestrator |
2025-09-13 00:52:09.941386 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-09-13 00:52:09.941397 | orchestrator | Saturday 13 September 2025 00:50:20 +0000 (0:00:01.569) 0:00:32.472 ****
2025-09-13 00:52:09.941408 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-09-13 00:52:09.941419 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-09-13 00:52:09.941430 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-09-13 00:52:09.941440 | orchestrator |
2025-09-13 00:52:09.941459 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-13 00:52:09.941470 | orchestrator | Saturday 13 September 2025 00:50:23 +0000 (0:00:02.520) 0:00:34.993 ****
2025-09-13 00:52:09.941481 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:52:09.941492 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:52:09.941503 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:52:09.941514 | orchestrator |
2025-09-13 00:52:09.941524 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-09-13 00:52:09.941536 | orchestrator | Saturday 13 September 2025
00:50:23 +0000 (0:00:00.505) 0:00:35.499 **** 2025-09-13 00:52:09.941548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-13 00:52:09.941566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-13 00:52:09.941587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-13 00:52:09.941599 | orchestrator | 2025-09-13 00:52:09.941610 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-13 00:52:09.941621 | orchestrator | Saturday 13 September 2025 00:50:25 +0000 (0:00:01.683) 0:00:37.182 **** 2025-09-13 00:52:09.941639 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:52:09.941650 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:52:09.941661 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:52:09.941672 | orchestrator | 2025-09-13 00:52:09.941683 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-13 
00:52:09.941694 | orchestrator | Saturday 13 September 2025 00:50:26 +0000 (0:00:00.934) 0:00:38.116 ****
2025-09-13 00:52:09.941705 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:52:09.941716 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:52:09.941727 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:52:09.941738 | orchestrator |
2025-09-13 00:52:09.941749 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-09-13 00:52:09.941760 | orchestrator | Saturday 13 September 2025 00:50:34 +0000 (0:00:07.865) 0:00:45.982 ****
2025-09-13 00:52:09.941771 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:52:09.941782 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:52:09.941792 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:52:09.941803 | orchestrator |
2025-09-13 00:52:09.941814 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-13 00:52:09.941825 | orchestrator |
2025-09-13 00:52:09.941836 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-13 00:52:09.941846 | orchestrator | Saturday 13 September 2025 00:50:34 +0000 (0:00:00.487) 0:00:46.469 ****
2025-09-13 00:52:09.941857 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:52:09.941868 | orchestrator |
2025-09-13 00:52:09.941879 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-13 00:52:09.941890 | orchestrator | Saturday 13 September 2025 00:50:35 +0000 (0:00:00.619) 0:00:47.088 ****
2025-09-13 00:52:09.941901 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:52:09.941912 | orchestrator |
2025-09-13 00:52:09.941923 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-13 00:52:09.941934 | orchestrator | Saturday 13 September 2025 00:50:35 +0000 (0:00:00.211) 0:00:47.300 ****
2025-09-13 00:52:09.941945 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:52:09.941955 | orchestrator |
2025-09-13 00:52:09.941966 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-13 00:52:09.941977 | orchestrator | Saturday 13 September 2025 00:50:37 +0000 (0:00:01.640) 0:00:48.940 ****
2025-09-13 00:52:09.941988 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:52:09.941999 | orchestrator |
2025-09-13 00:52:09.942010 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-13 00:52:09.942067 | orchestrator |
2025-09-13 00:52:09.942078 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-13 00:52:09.942090 | orchestrator | Saturday 13 September 2025 00:51:33 +0000 (0:00:56.039) 0:01:44.979 ****
2025-09-13 00:52:09.942101 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:52:09.942147 | orchestrator |
2025-09-13 00:52:09.942159 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-13 00:52:09.942171 | orchestrator | Saturday 13 September 2025 00:51:33 +0000 (0:00:00.651) 0:01:45.631 ****
2025-09-13 00:52:09.942209 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:52:09.942220 | orchestrator |
2025-09-13 00:52:09.942231 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-13 00:52:09.942243 | orchestrator | Saturday 13 September 2025 00:51:34 +0000 (0:00:00.240) 0:01:45.871 ****
2025-09-13 00:52:09.942254 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:52:09.942265 | orchestrator |
2025-09-13 00:52:09.942275 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-13 00:52:09.942286 | orchestrator | Saturday 13 September 2025 00:51:35 +0000 (0:00:01.735) 0:01:47.607 ****
2025-09-13 00:52:09.942297 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:52:09.942308 | orchestrator |
2025-09-13 00:52:09.942325 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-13 00:52:09.942345 | orchestrator |
2025-09-13 00:52:09.942356 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-13 00:52:09.942367 | orchestrator | Saturday 13 September 2025 00:51:49 +0000 (0:00:13.790) 0:02:01.397 ****
2025-09-13 00:52:09.942378 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:52:09.942389 | orchestrator |
2025-09-13 00:52:09.942400 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-13 00:52:09.942411 | orchestrator | Saturday 13 September 2025 00:51:50 +0000 (0:00:00.579) 0:02:01.977 ****
2025-09-13 00:52:09.942422 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:52:09.942432 | orchestrator |
2025-09-13 00:52:09.942443 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-13 00:52:09.942454 | orchestrator | Saturday 13 September 2025 00:51:50 +0000 (0:00:00.205) 0:02:02.183 ****
2025-09-13 00:52:09.942465 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:52:09.942476 | orchestrator |
2025-09-13 00:52:09.942487 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-13 00:52:09.942506 | orchestrator | Saturday 13 September 2025 00:51:56 +0000 (0:00:06.483) 0:02:08.666 ****
2025-09-13 00:52:09.942517 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:52:09.942528 | orchestrator |
2025-09-13 00:52:09.942539 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-09-13 00:52:09.942550 | orchestrator |
2025-09-13 00:52:09.942561 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-09-13 00:52:09.942572 | orchestrator | Saturday 13
September 2025 00:52:06 +0000 (0:00:09.432) 0:02:18.099 ****
2025-09-13 00:52:09.942583 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:52:09.942594 | orchestrator |
2025-09-13 00:52:09.942605 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-09-13 00:52:09.942615 | orchestrator | Saturday 13 September 2025 00:52:06 +0000 (0:00:00.649) 0:02:18.748 ****
2025-09-13 00:52:09.942626 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-09-13 00:52:09.942637 | orchestrator | enable_outward_rabbitmq_True
2025-09-13 00:52:09.942648 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-09-13 00:52:09.942659 | orchestrator | outward_rabbitmq_restart
2025-09-13 00:52:09.942670 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:52:09.942681 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:52:09.942692 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:52:09.942702 | orchestrator |
2025-09-13 00:52:09.942713 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-09-13 00:52:09.942724 | orchestrator | skipping: no hosts matched
2025-09-13 00:52:09.942735 | orchestrator |
2025-09-13 00:52:09.942746 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-09-13 00:52:09.942757 | orchestrator | skipping: no hosts matched
2025-09-13 00:52:09.942768 | orchestrator |
2025-09-13 00:52:09.942779 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-09-13 00:52:09.942790 | orchestrator | skipping: no hosts matched
2025-09-13 00:52:09.942801 | orchestrator |
2025-09-13 00:52:09.942812 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:52:09.942823 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-09-13 00:52:09.942835 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-13 00:52:09.942846 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:52:09.942857 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 00:52:09.942868 | orchestrator |
2025-09-13 00:52:09.942886 | orchestrator |
2025-09-13 00:52:09.942897 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:52:09.942908 | orchestrator | Saturday 13 September 2025 00:52:09 +0000 (0:00:02.572) 0:02:21.320 ****
2025-09-13 00:52:09.942919 | orchestrator | ===============================================================================
2025-09-13 00:52:09.942930 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.26s
2025-09-13 00:52:09.942940 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.86s
2025-09-13 00:52:09.942951 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.87s
2025-09-13 00:52:09.942962 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 5.29s
2025-09-13 00:52:09.942973 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.51s
2025-09-13 00:52:09.942984 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.86s
2025-09-13 00:52:09.942995 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.80s
2025-09-13 00:52:09.943006 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.57s
2025-09-13 00:52:09.943017 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.52s
2025-09-13 00:52:09.943027 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.85s
2025-09-13 00:52:09.943038 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.73s
2025-09-13 00:52:09.943049 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.68s
2025-09-13 00:52:09.943060 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.68s
2025-09-13 00:52:09.943076 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.61s
2025-09-13 00:52:09.943087 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.57s
2025-09-13 00:52:09.943098 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.52s
2025-09-13 00:52:09.943109 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.07s
2025-09-13 00:52:09.943119 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 1.06s
2025-09-13 00:52:09.943130 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 0.99s
2025-09-13 00:52:09.943141 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.96s
2025-09-13 00:52:09.943152 | orchestrator | 2025-09-13 00:52:09 | INFO  | Task 9dd7142e-85cd-4034-83e5-3b960a8744b6 is in state SUCCESS
2025-09-13 00:52:09.943163 | orchestrator | 2025-09-13 00:52:09 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:52:09.943225 | orchestrator | 2025-09-13 00:52:09 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:52:09.943239 | orchestrator | 2025-09-13 00:52:09 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:52:12.970498 | orchestrator | 2025-09-13 00:52:12 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4
is in state STARTED
2025-09-13 00:52:12.970608 | orchestrator | 2025-09-13 00:52:12 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:52:12.971262 | orchestrator | 2025-09-13 00:52:12 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:52:12.971358 | orchestrator | 2025-09-13 00:52:12 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:52:16.020785 | orchestrator | 2025-09-13 00:52:16 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:52:16.024542 | orchestrator | 2025-09-13 00:52:16 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:52:16.026206 | orchestrator | 2025-09-13 00:52:16 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:52:16.026597 | orchestrator | 2025-09-13 00:52:16 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:52:19.063135 | orchestrator | 2025-09-13 00:52:19 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:52:19.064536 | orchestrator | 2025-09-13 00:52:19 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:52:19.066270 | orchestrator | 2025-09-13 00:52:19 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:52:19.066563 | orchestrator | 2025-09-13 00:52:19 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:52:22.099587 | orchestrator | 2025-09-13 00:52:22 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:52:22.101357 | orchestrator | 2025-09-13 00:52:22 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:52:22.103270 | orchestrator | 2025-09-13 00:52:22 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:52:22.103508 | orchestrator | 2025-09-13 00:52:22 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:52:25.140713 | orchestrator | 2025-09-13 00:52:25 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:52:25.141129 | orchestrator | 2025-09-13 00:52:25 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:52:25.142917 | orchestrator | 2025-09-13 00:52:25 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:52:25.143157 | orchestrator | 2025-09-13 00:52:25 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:52:28.173980 | orchestrator | 2025-09-13 00:52:28 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:52:28.175241 | orchestrator | 2025-09-13 00:52:28 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:52:28.177981 | orchestrator | 2025-09-13 00:52:28 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:52:28.178931 | orchestrator | 2025-09-13 00:52:28 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:52:31.207158 | orchestrator | 2025-09-13 00:52:31 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:52:31.207300 | orchestrator | 2025-09-13 00:52:31 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:52:31.207917 | orchestrator | 2025-09-13 00:52:31 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:52:31.208349 | orchestrator | 2025-09-13 00:52:31 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:52:34.250815 | orchestrator | 2025-09-13 00:52:34 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:52:34.253115 | orchestrator | 2025-09-13 00:52:34 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:52:34.255949 | orchestrator | 2025-09-13 00:52:34 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:52:34.256061 | orchestrator | 2025-09-13 00:52:34 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:52:37.295678 | orchestrator | 2025-09-13 00:52:37 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:52:37.298290 | orchestrator | 2025-09-13 00:52:37 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:52:37.300504 | orchestrator | 2025-09-13 00:52:37 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:52:37.300851 | orchestrator | 2025-09-13 00:52:37 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:52:40.333477 | orchestrator | 2025-09-13 00:52:40 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:52:40.334590 | orchestrator | 2025-09-13 00:52:40 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:52:40.335807 | orchestrator | 2025-09-13 00:52:40 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:52:40.335937 | orchestrator | 2025-09-13 00:52:40 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:52:43.384170 | orchestrator | 2025-09-13 00:52:43 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:52:43.384634 | orchestrator | 2025-09-13 00:52:43 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:52:43.389429 | orchestrator | 2025-09-13 00:52:43 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:52:43.392024 | orchestrator | 2025-09-13 00:52:43 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:52:46.442963 | orchestrator | 2025-09-13 00:52:46 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:52:46.446845 | orchestrator | 2025-09-13 00:52:46 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:52:46.449629 | orchestrator | 2025-09-13 00:52:46 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:52:46.449912 | orchestrator | 2025-09-13 00:52:46 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:52:49.493559 | orchestrator | 2025-09-13 00:52:49 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:52:49.496714 | orchestrator | 2025-09-13 00:52:49 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:52:49.498126 | orchestrator | 2025-09-13 00:52:49 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:52:49.498362 | orchestrator | 2025-09-13 00:52:49 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:52:52.544128 | orchestrator | 2025-09-13 00:52:52 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:52:52.549558 | orchestrator | 2025-09-13 00:52:52 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:52:52.550762 | orchestrator | 2025-09-13 00:52:52 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:52:52.551005 | orchestrator | 2025-09-13 00:52:52 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:52:55.592299 | orchestrator | 2025-09-13 00:52:55 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:52:55.593995 | orchestrator | 2025-09-13 00:52:55 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:52:55.601478 | orchestrator | 2025-09-13 00:52:55 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:52:55.601504 | orchestrator | 2025-09-13 00:52:55 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:52:58.643647 | orchestrator | 2025-09-13 00:52:58 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:52:58.644700 | orchestrator | 2025-09-13 00:52:58 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:52:58.645657 | orchestrator | 2025-09-13 00:52:58 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:52:58.645688 | orchestrator | 2025-09-13 00:52:58 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:53:01.690939 | orchestrator | 2025-09-13 00:53:01 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:53:01.691963 | orchestrator | 2025-09-13 00:53:01 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:53:01.692826 | orchestrator | 2025-09-13 00:53:01 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:53:01.694590 | orchestrator | 2025-09-13 00:53:01 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:53:04.747333 | orchestrator | 2025-09-13 00:53:04 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:53:04.750702 | orchestrator | 2025-09-13 00:53:04 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:53:04.753492 | orchestrator | 2025-09-13 00:53:04 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:53:04.753511 | orchestrator | 2025-09-13 00:53:04 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:53:07.786421 | orchestrator | 2025-09-13 00:53:07 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state STARTED
2025-09-13 00:53:07.788485 | orchestrator | 2025-09-13 00:53:07 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED
2025-09-13 00:53:07.790531 | orchestrator | 2025-09-13 00:53:07 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED
2025-09-13 00:53:07.791306 | orchestrator | 2025-09-13 00:53:07 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:53:10.825815 | orchestrator | 2025-09-13 00:53:10 | INFO  | Task bfcd39e5-4fd9-4656-b8e9-eac4bc7302a4 is in state SUCCESS
2025-09-13 00:53:10.826970 | orchestrator |
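The interleaved "Task <uuid> is in state STARTED … Wait 1 second(s) until the next check" lines above come from the OSISM client polling the manager's task queue until each task reports SUCCESS. A minimal sketch of that poll-until-done pattern follows; the function name `wait_for_tasks` and the `get_state` callback are hypothetical illustrations, not the actual osism client code:

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0):
    """Poll task states until every task reports SUCCESS.

    get_state(task_id) -> one of "STARTED", "SUCCESS", "FAILURE".
    Raises RuntimeError on a failed task, TimeoutError if the deadline passes.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        # sorted() snapshots the set, so discarding inside the loop is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
            elif state == "FAILURE":
                raise RuntimeError(f"Task {task_id} failed")
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"Tasks still pending: {sorted(pending)}")
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

As in the log, independent tasks are re-checked on every pass, so a task that finishes early (like 9dd7142e… above) drops out of the poll while the remaining ones keep being watched.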
2025-09-13 00:53:10.827013 | orchestrator |
2025-09-13 00:53:10.827026 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 00:53:10.827099 | orchestrator |
2025-09-13 00:53:10.827113 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-13 00:53:10.827124 | orchestrator | Saturday 13 September 2025 00:50:39 +0000 (0:00:00.163) 0:00:00.163 ****
2025-09-13 00:53:10.827136 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:53:10.827257 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:53:10.827270 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:53:10.827281 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:53:10.827291 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:53:10.827302 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:53:10.827313 | orchestrator |
2025-09-13 00:53:10.827324 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 00:53:10.827335 | orchestrator | Saturday 13 September 2025 00:50:40 +0000 (0:00:00.812) 0:00:00.975 ****
2025-09-13 00:53:10.827346 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-09-13 00:53:10.827385 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-09-13 00:53:10.827397 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-09-13 00:53:10.827409 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-09-13 00:53:10.827420 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-09-13 00:53:10.827431 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-09-13 00:53:10.827442 | orchestrator |
2025-09-13 00:53:10.827454 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-09-13 00:53:10.827465 | orchestrator |
2025-09-13 00:53:10.827477 | orchestrator | TASK [ovn-controller : include_tasks]
****************************************** 2025-09-13 00:53:10.827488 | orchestrator | Saturday 13 September 2025 00:50:41 +0000 (0:00:01.043) 0:00:02.018 **** 2025-09-13 00:53:10.827500 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-13 00:53:10.827513 | orchestrator | 2025-09-13 00:53:10.827525 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-13 00:53:10.827559 | orchestrator | Saturday 13 September 2025 00:50:42 +0000 (0:00:01.167) 0:00:03.186 **** 2025-09-13 00:53:10.827577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.827592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.827618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.827632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.827647 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.827660 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.827673 | orchestrator | 2025-09-13 00:53:10.827700 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-13 00:53:10.827714 | orchestrator | Saturday 13 September 2025 00:50:44 +0000 (0:00:01.295) 0:00:04.481 **** 2025-09-13 00:53:10.827727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.827741 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.827754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.827775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.827789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.827836 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.827851 | orchestrator | 2025-09-13 00:53:10.827888 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-13 00:53:10.827903 | orchestrator | Saturday 13 September 2025 00:50:45 +0000 (0:00:01.649) 0:00:06.130 **** 2025-09-13 00:53:10.827916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.827927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.827957 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.827970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.827982 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.828002 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.828014 | orchestrator | 2025-09-13 00:53:10.828025 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-13 00:53:10.828037 | orchestrator | Saturday 13 September 2025 00:50:46 +0000 (0:00:01.197) 0:00:07.328 **** 2025-09-13 00:53:10.828048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.828060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.828077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.828089 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.828100 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.828111 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.828122 | orchestrator | 2025-09-13 00:53:10.828138 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-09-13 00:53:10.828150 | orchestrator | Saturday 13 September 2025 00:50:48 +0000 (0:00:01.460) 0:00:08.789 **** 2025-09-13 00:53:10.828161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.828179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.828210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.828222 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.828233 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.828249 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.828260 | orchestrator | 2025-09-13 00:53:10.828271 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-13 00:53:10.828282 | orchestrator | Saturday 13 September 2025 00:50:49 +0000 (0:00:01.307) 0:00:10.097 **** 
2025-09-13 00:53:10.828294 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:53:10.828305 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:53:10.828315 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:53:10.828326 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:53:10.828337 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:53:10.828348 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:53:10.828358 | orchestrator |
2025-09-13 00:53:10.828369 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-09-13 00:53:10.828380 | orchestrator | Saturday 13 September 2025 00:50:52 +0000 (0:00:02.800) 0:00:12.897 ****
2025-09-13 00:53:10.828391 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-09-13 00:53:10.828402 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-09-13 00:53:10.828413 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-09-13 00:53:10.828424 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-09-13 00:53:10.828435 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-09-13 00:53:10.828445 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-09-13 00:53:10.828462 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-13 00:53:10.828473 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-13 00:53:10.828489 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-13 00:53:10.828500 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-13 00:53:10.828511 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-13 00:53:10.828522 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-13 00:53:10.828532 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-13 00:53:10.828545 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-13 00:53:10.828556 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-13 00:53:10.828567 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-13 00:53:10.828577 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-13 00:53:10.828588 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-13 00:53:10.828599 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-13 00:53:10.828610 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-13 00:53:10.828621 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-13 00:53:10.828632 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-13 00:53:10.828642 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-13 00:53:10.828653 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-13 00:53:10.828664 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-13 00:53:10.828674 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-13 00:53:10.828685 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-13 00:53:10.828695 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-13 00:53:10.828706 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-13 00:53:10.828716 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-13 00:53:10.828727 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-13 00:53:10.828743 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-13 00:53:10.828754 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-13 00:53:10.828765 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-13 00:53:10.828775 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-13 00:53:10.828786 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-13 00:53:10.828802 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-13 00:53:10.828813 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-13 00:53:10.828824 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-13 00:53:10.828835 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-13 00:53:10.828846 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-13 00:53:10.828856 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-13 00:53:10.828867 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-09-13 00:53:10.828878 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-09-13 00:53:10.828894 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-09-13 00:53:10.828905 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-09-13 00:53:10.828924 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-09-13 00:53:10.828941 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-09-13 00:53:10.828952 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-13 00:53:10.828962 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-13 00:53:10.828973 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-13 00:53:10.828984 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-13 00:53:10.828994 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-13 00:53:10.829005 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-13 00:53:10.829016 | orchestrator |
2025-09-13 00:53:10.829027 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-13 00:53:10.829038 | orchestrator | Saturday 13 September 2025 00:51:10 +0000 (0:00:17.846) 0:00:30.744 ****
2025-09-13 00:53:10.829048 | orchestrator |
2025-09-13 00:53:10.829059 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-13 00:53:10.829070 | orchestrator | Saturday 13 September 2025 00:51:10 +0000 (0:00:00.308) 0:00:31.052 ****
2025-09-13 00:53:10.829080 | orchestrator |
2025-09-13 00:53:10.829091 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-13 00:53:10.829101 | orchestrator | Saturday 13 September 2025 00:51:10 +0000 (0:00:00.070) 0:00:31.123 ****
2025-09-13 00:53:10.829112 | orchestrator |
2025-09-13 00:53:10.829123 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-13 00:53:10.829133 | orchestrator | Saturday 13 September 2025 00:51:10 +0000 (0:00:00.072) 0:00:31.195 ****
2025-09-13 00:53:10.829144 | orchestrator |
2025-09-13 00:53:10.829154 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-13 00:53:10.829172 | orchestrator | Saturday 13 September 2025 00:51:10 +0000 (0:00:00.071) 0:00:31.267 ****
2025-09-13 00:53:10.829183 | orchestrator |
2025-09-13 00:53:10.829223 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-13 00:53:10.829235 | orchestrator | Saturday 13 September 2025 00:51:10 +0000 (0:00:00.073) 0:00:31.340 ****
2025-09-13 00:53:10.829245 | orchestrator |
2025-09-13 00:53:10.829256 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-09-13 00:53:10.829267 | orchestrator | Saturday 13 September 2025 00:51:11 +0000 (0:00:00.075) 0:00:31.416 ****
2025-09-13 00:53:10.829277 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:53:10.829288 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:53:10.829299 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:53:10.829309 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:53:10.829320 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:53:10.829336 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:53:10.829347 | orchestrator |
2025-09-13 00:53:10.829358 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-09-13 00:53:10.829369 | orchestrator | Saturday 13 September 2025 00:51:12 +0000 (0:00:01.478) 0:00:32.894 ****
2025-09-13 00:53:10.829380 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:53:10.829391 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:53:10.829402 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:53:10.829412 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:53:10.829423 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:53:10.829434 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:53:10.829444 | orchestrator |
2025-09-13 00:53:10.829455 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-09-13 00:53:10.829466 | orchestrator |
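The "Configure OVN in OVSDB" task above writes each chassis's settings into the local Open vSwitch database as `external_ids` entries (present items are set, absent items are removed). A minimal sketch of that translation, assuming a hypothetical helper `ovs_vsctl_commands` rather than the actual kolla-ansible module, with values taken from the testbed-node-0 output above:

```python
# Sketch: map the per-chassis items from the "Configure OVN in OVSDB" task
# onto equivalent ovs-vsctl invocations. This is an illustrative helper,
# not the kolla-ansible implementation.

def ovs_vsctl_commands(items):
    """Build one ovs-vsctl command string per external_ids item."""
    commands = []
    for item in items:
        key, value = item["name"], item["value"]
        if item.get("state", "present") == "present":
            # Set (or update) the key on the local Open_vSwitch record.
            commands.append(f"ovs-vsctl set Open_vSwitch . external_ids:{key}={value}")
        else:
            # 'absent' items are cleaned up instead.
            commands.append(f"ovs-vsctl remove Open_vSwitch . external_ids {key}")
    return commands

# Items as reported for testbed-node-0 in the log above (subset).
node0_items = [
    {"name": "ovn-encap-ip", "value": "192.168.16.10"},
    {"name": "ovn-encap-type", "value": "geneve"},
    {"name": "ovn-remote",
     "value": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"},
    {"name": "ovn-monitor-all", "value": False},
    {"name": "ovn-chassis-mac-mappings",
     "value": "physnet1:52:54:00:52:c1:40", "state": "absent"},
]

for cmd in ovs_vsctl_commands(node0_items):
    print(cmd)
```

This also shows why the handler flush afterwards restarts `ovn_controller`: the daemon picks up `ovn-remote` and the encapsulation settings from these `external_ids` when it (re)connects to the southbound database.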
2025-09-13 00:53:10.829477 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-13 00:53:10.829488 | orchestrator | Saturday 13 September 2025 00:51:47 +0000 (0:00:35.435) 0:01:08.330 ****
2025-09-13 00:53:10.829498 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:53:10.829509 | orchestrator |
2025-09-13 00:53:10.829520 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-13 00:53:10.829530 | orchestrator | Saturday 13 September 2025 00:51:48 +0000 (0:00:00.718) 0:01:09.048 ****
2025-09-13 00:53:10.829541 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:53:10.829552 | orchestrator |
2025-09-13 00:53:10.829562 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-09-13 00:53:10.829573 | orchestrator | Saturday 13 September 2025 00:51:49 +0000 (0:00:00.516) 0:01:09.565 ****
2025-09-13 00:53:10.829584 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:53:10.829594 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:53:10.829605 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:53:10.829615 | orchestrator |
2025-09-13 00:53:10.829626 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-09-13 00:53:10.829637 | orchestrator | Saturday 13 September 2025 00:51:49 +0000 (0:00:00.806) 0:01:10.372 ****
2025-09-13 00:53:10.829648 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:53:10.829658 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:53:10.829669 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:53:10.829685 | orchestrator |
2025-09-13 00:53:10.829696 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-09-13 00:53:10.829707 | orchestrator | Saturday 13 September 2025 00:51:50 +0000 (0:00:00.333) 0:01:10.706 ****
2025-09-13 00:53:10.829718 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:53:10.829728 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:53:10.829739 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:53:10.829750 | orchestrator |
2025-09-13 00:53:10.829760 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-09-13 00:53:10.829771 | orchestrator | Saturday 13 September 2025 00:51:50 +0000 (0:00:00.287) 0:01:10.993 ****
2025-09-13 00:53:10.829789 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:53:10.829800 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:53:10.829810 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:53:10.829821 | orchestrator |
2025-09-13 00:53:10.829832 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-09-13 00:53:10.829842 | orchestrator | Saturday 13 September 2025 00:51:50 +0000 (0:00:00.263) 0:01:11.256 ****
2025-09-13 00:53:10.829853 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:53:10.829864 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:53:10.829874 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:53:10.829885 | orchestrator |
2025-09-13 00:53:10.829896 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-09-13 00:53:10.829907 | orchestrator | Saturday 13 September 2025 00:51:51 +0000 (0:00:00.398) 0:01:11.655 ****
2025-09-13 00:53:10.829918 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:53:10.829928 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:53:10.829939 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:53:10.829950 | orchestrator |
2025-09-13 00:53:10.829961 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-09-13 00:53:10.829972 | orchestrator | Saturday 13 September 2025 00:51:51 +0000 (0:00:00.254) 0:01:11.909 ****
2025-09-13 00:53:10.829983 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:53:10.829993 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:53:10.830004 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:53:10.830062 | orchestrator |
2025-09-13 00:53:10.830076 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-09-13 00:53:10.830087 | orchestrator | Saturday 13 September 2025 00:51:51 +0000 (0:00:00.302) 0:01:12.212 ****
2025-09-13 00:53:10.830098 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:53:10.830109 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:53:10.830119 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:53:10.830130 | orchestrator |
2025-09-13 00:53:10.830141 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-09-13 00:53:10.830152 | orchestrator | Saturday 13 September 2025 00:51:52 +0000 (0:00:00.338) 0:01:12.551 ****
2025-09-13 00:53:10.830163 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:53:10.830174 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:53:10.830184 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:53:10.830244 | orchestrator |
2025-09-13 00:53:10.830256 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-09-13 00:53:10.830267 | orchestrator | Saturday 13 September 2025 00:51:52 +0000 (0:00:00.596) 0:01:13.148 ****
2025-09-13 00:53:10.830278 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:53:10.830289 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:53:10.830300 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:53:10.830310 | orchestrator |
2025-09-13 00:53:10.830321 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-09-13 00:53:10.830332 | orchestrator | Saturday 13 September 2025 00:51:53 +0000 (0:00:00.335) 0:01:13.483 ****
2025-09-13 00:53:10.830343 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:53:10.830354 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:53:10.830365 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:53:10.830376 | orchestrator |
2025-09-13 00:53:10.830387 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-09-13 00:53:10.830409 | orchestrator | Saturday 13 September 2025 00:51:53 +0000 (0:00:00.281) 0:01:13.765 ****
2025-09-13 00:53:10.830421 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:53:10.830431 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:53:10.830442 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:53:10.830453 | orchestrator |
2025-09-13 00:53:10.830464 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-09-13 00:53:10.830474 | orchestrator | Saturday 13 September 2025 00:51:53 +0000 (0:00:00.292) 0:01:14.057 ****
2025-09-13 00:53:10.830485 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:53:10.830503 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:53:10.830512 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:53:10.830522 | orchestrator |
2025-09-13 00:53:10.830532 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-09-13 00:53:10.830541 | orchestrator | Saturday 13 September 2025 00:51:53 +0000 (0:00:00.281) 0:01:14.339 ****
2025-09-13 00:53:10.830551 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:53:10.830560 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:53:10.830570 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:53:10.830579 | orchestrator |
2025-09-13 00:53:10.830589 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-09-13 00:53:10.830598 | orchestrator | Saturday 13 September 2025 00:51:54 +0000 (0:00:00.526) 0:01:14.866 ****
2025-09-13 00:53:10.830608 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:53:10.830618 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:53:10.830627 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:53:10.830637 | orchestrator |
2025-09-13 00:53:10.830646 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-09-13 00:53:10.830656 | orchestrator | Saturday 13 September 2025 00:51:54 +0000 (0:00:00.386) 0:01:15.253 ****
2025-09-13 00:53:10.830666 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:53:10.830675 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:53:10.830685 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:53:10.830694 | orchestrator |
2025-09-13 00:53:10.830704 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-09-13 00:53:10.830713 | orchestrator | Saturday 13 September 2025 00:51:55 +0000 (0:00:00.402) 0:01:15.656 ****
2025-09-13 00:53:10.830723 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:53:10.830733 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:53:10.830748 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:53:10.830758 | orchestrator |
2025-09-13 00:53:10.830767 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-13 00:53:10.830777 | orchestrator | Saturday 13 September 2025 00:51:55 +0000 (0:00:00.345) 0:01:16.001 ****
2025-09-13 00:53:10.830786 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:53:10.830796 | orchestrator |
2025-09-13 00:53:10.830805 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-09-13 00:53:10.830815 | orchestrator | Saturday 13 September 2025 00:51:56 +0000 (0:00:01.226) 0:01:17.227 ****
2025-09-13 00:53:10.830824 | orchestrator |
ok: [testbed-node-0] 2025-09-13 00:53:10.830834 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:53:10.830844 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:53:10.830853 | orchestrator | 2025-09-13 00:53:10.830863 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-13 00:53:10.830872 | orchestrator | Saturday 13 September 2025 00:51:57 +0000 (0:00:00.685) 0:01:17.913 **** 2025-09-13 00:53:10.830882 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:53:10.830892 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:53:10.830901 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:53:10.830911 | orchestrator | 2025-09-13 00:53:10.830920 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-13 00:53:10.830930 | orchestrator | Saturday 13 September 2025 00:51:58 +0000 (0:00:00.627) 0:01:18.540 **** 2025-09-13 00:53:10.830940 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:53:10.830949 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:53:10.830959 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:53:10.830968 | orchestrator | 2025-09-13 00:53:10.830978 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-13 00:53:10.830987 | orchestrator | Saturday 13 September 2025 00:51:58 +0000 (0:00:00.620) 0:01:19.161 **** 2025-09-13 00:53:10.830997 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:53:10.831006 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:53:10.831016 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:53:10.831031 | orchestrator | 2025-09-13 00:53:10.831041 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-13 00:53:10.831050 | orchestrator | Saturday 13 September 2025 00:51:59 +0000 (0:00:00.393) 0:01:19.554 **** 2025-09-13 00:53:10.831060 | orchestrator | skipping: [testbed-node-0] 
2025-09-13 00:53:10.831069 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:53:10.831079 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:53:10.831088 | orchestrator | 2025-09-13 00:53:10.831098 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-13 00:53:10.831108 | orchestrator | Saturday 13 September 2025 00:51:59 +0000 (0:00:00.371) 0:01:19.926 **** 2025-09-13 00:53:10.831117 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:53:10.831127 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:53:10.831136 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:53:10.831146 | orchestrator | 2025-09-13 00:53:10.831155 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-09-13 00:53:10.831165 | orchestrator | Saturday 13 September 2025 00:51:59 +0000 (0:00:00.407) 0:01:20.333 **** 2025-09-13 00:53:10.831174 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:53:10.831184 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:53:10.831208 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:53:10.831218 | orchestrator | 2025-09-13 00:53:10.831228 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-09-13 00:53:10.831237 | orchestrator | Saturday 13 September 2025 00:52:00 +0000 (0:00:00.553) 0:01:20.886 **** 2025-09-13 00:53:10.831247 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:53:10.831256 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:53:10.831266 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:53:10.831276 | orchestrator | 2025-09-13 00:53:10.831290 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-13 00:53:10.831300 | orchestrator | Saturday 13 September 2025 00:52:00 +0000 (0:00:00.364) 0:01:21.251 **** 2025-09-13 00:53:10.831310 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-13 00:53:10.831377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831417 | orchestrator | 2025-09-13 00:53:10.831427 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-13 00:53:10.831436 | orchestrator | 
Saturday 13 September 2025 00:52:02 +0000 (0:00:01.413) 0:01:22.665 **** 2025-09-13 00:53:10.831451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831552 | orchestrator | 
2025-09-13 00:53:10.831562 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-13 00:53:10.831571 | orchestrator | Saturday 13 September 2025 00:52:06 +0000 (0:00:03.998) 0:01:26.664 **** 2025-09-13 00:53:10.831581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.831687 | orchestrator | 2025-09-13 00:53:10.831696 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-13 00:53:10.831706 | orchestrator | Saturday 13 September 2025 00:52:08 +0000 (0:00:02.285) 0:01:28.949 **** 2025-09-13 00:53:10.831716 | orchestrator | 2025-09-13 00:53:10.831725 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-13 00:53:10.831735 | orchestrator | Saturday 13 September 2025 00:52:08 +0000 (0:00:00.064) 0:01:29.013 **** 2025-09-13 00:53:10.831744 | orchestrator | 2025-09-13 00:53:10.831754 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-13 00:53:10.831763 | orchestrator | Saturday 13 September 2025 00:52:08 +0000 (0:00:00.073) 0:01:29.087 **** 2025-09-13 00:53:10.831773 | orchestrator | 2025-09-13 00:53:10.831782 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-13 00:53:10.831792 | orchestrator | Saturday 13 September 2025 00:52:08 +0000 (0:00:00.067) 0:01:29.154 **** 2025-09-13 00:53:10.831801 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:53:10.831811 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:53:10.831820 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:53:10.831830 | orchestrator | 2025-09-13 00:53:10.831839 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-13 00:53:10.831849 | orchestrator | Saturday 13 September 2025 00:52:16 +0000 (0:00:07.389) 0:01:36.544 **** 2025-09-13 00:53:10.831859 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:53:10.831868 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:53:10.831878 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:53:10.831887 | orchestrator | 2025-09-13 
00:53:10.831896 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-13 00:53:10.831906 | orchestrator | Saturday 13 September 2025 00:52:23 +0000 (0:00:07.241) 0:01:43.786 **** 2025-09-13 00:53:10.831915 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:53:10.831929 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:53:10.831939 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:53:10.831948 | orchestrator | 2025-09-13 00:53:10.831958 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-13 00:53:10.831968 | orchestrator | Saturday 13 September 2025 00:52:30 +0000 (0:00:07.362) 0:01:51.149 **** 2025-09-13 00:53:10.831977 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:53:10.831992 | orchestrator | 2025-09-13 00:53:10.832002 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-13 00:53:10.832011 | orchestrator | Saturday 13 September 2025 00:52:31 +0000 (0:00:00.235) 0:01:51.385 **** 2025-09-13 00:53:10.832021 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:53:10.832030 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:53:10.832040 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:53:10.832049 | orchestrator | 2025-09-13 00:53:10.832059 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-13 00:53:10.832068 | orchestrator | Saturday 13 September 2025 00:52:31 +0000 (0:00:00.787) 0:01:52.173 **** 2025-09-13 00:53:10.832078 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:53:10.832087 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:53:10.832097 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:53:10.832106 | orchestrator | 2025-09-13 00:53:10.832116 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-13 00:53:10.832126 | orchestrator | Saturday 
13 September 2025 00:52:32 +0000 (0:00:00.727) 0:01:52.900 **** 2025-09-13 00:53:10.832135 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:53:10.832145 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:53:10.832154 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:53:10.832164 | orchestrator | 2025-09-13 00:53:10.832173 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-13 00:53:10.832183 | orchestrator | Saturday 13 September 2025 00:52:33 +0000 (0:00:00.767) 0:01:53.668 **** 2025-09-13 00:53:10.832208 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:53:10.832218 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:53:10.832228 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:53:10.832237 | orchestrator | 2025-09-13 00:53:10.832247 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-13 00:53:10.832256 | orchestrator | Saturday 13 September 2025 00:52:33 +0000 (0:00:00.647) 0:01:54.316 **** 2025-09-13 00:53:10.832266 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:53:10.832275 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:53:10.832290 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:53:10.832300 | orchestrator | 2025-09-13 00:53:10.832310 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-13 00:53:10.832320 | orchestrator | Saturday 13 September 2025 00:52:34 +0000 (0:00:00.959) 0:01:55.275 **** 2025-09-13 00:53:10.832329 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:53:10.832338 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:53:10.832348 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:53:10.832357 | orchestrator | 2025-09-13 00:53:10.832367 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-13 00:53:10.832377 | orchestrator | Saturday 13 September 2025 00:52:35 +0000 (0:00:00.791) 0:01:56.067 
**** 2025-09-13 00:53:10.832386 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:53:10.832396 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:53:10.832405 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:53:10.832414 | orchestrator | 2025-09-13 00:53:10.832424 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-13 00:53:10.832433 | orchestrator | Saturday 13 September 2025 00:52:35 +0000 (0:00:00.287) 0:01:56.354 **** 2025-09-13 00:53:10.832443 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832453 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832463 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832479 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832493 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832504 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832514 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832524 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832540 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 
'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832550 | orchestrator | 2025-09-13 00:53:10.832560 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-13 00:53:10.832569 | orchestrator | Saturday 13 September 2025 00:52:37 +0000 (0:00:01.429) 0:01:57.783 **** 2025-09-13 00:53:10.832579 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832589 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832599 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832625 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832659 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-09-13 00:53:10.832669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832679 | orchestrator | 2025-09-13 00:53:10.832689 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-13 00:53:10.832698 | orchestrator | Saturday 13 September 2025 00:52:42 +0000 (0:00:05.045) 0:02:02.829 **** 2025-09-13 00:53:10.832714 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832724 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832734 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832753 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832773 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832807 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 00:53:10.832817 | orchestrator | 2025-09-13 00:53:10.832827 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-13 00:53:10.832837 | orchestrator | Saturday 13 September 2025 00:52:45 +0000 (0:00:02.855) 0:02:05.684 **** 2025-09-13 00:53:10.832846 | orchestrator | 2025-09-13 00:53:10.832856 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-13 00:53:10.832865 | orchestrator | Saturday 13 September 2025 00:52:45 +0000 (0:00:00.068) 0:02:05.753 **** 2025-09-13 00:53:10.832875 | orchestrator | 2025-09-13 00:53:10.832884 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-13 00:53:10.832894 | orchestrator | Saturday 13 September 2025 00:52:45 +0000 (0:00:00.073) 0:02:05.827 **** 2025-09-13 00:53:10.832903 | orchestrator | 2025-09-13 00:53:10.832913 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-13 00:53:10.832922 | orchestrator | Saturday 13 September 2025 00:52:45 +0000 (0:00:00.065) 0:02:05.893 **** 2025-09-13 00:53:10.832932 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:53:10.832941 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:53:10.832951 | orchestrator | 2025-09-13 00:53:10.832965 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] 
************************* 2025-09-13 00:53:10.832975 | orchestrator | Saturday 13 September 2025 00:52:51 +0000 (0:00:06.042) 0:02:11.935 **** 2025-09-13 00:53:10.832990 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:53:10.833000 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:53:10.833009 | orchestrator | 2025-09-13 00:53:10.833019 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-13 00:53:10.833028 | orchestrator | Saturday 13 September 2025 00:52:57 +0000 (0:00:06.145) 0:02:18.081 **** 2025-09-13 00:53:10.833038 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:53:10.833048 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:53:10.833057 | orchestrator | 2025-09-13 00:53:10.833067 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-13 00:53:10.833076 | orchestrator | Saturday 13 September 2025 00:53:04 +0000 (0:00:06.633) 0:02:24.714 **** 2025-09-13 00:53:10.833086 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:53:10.833095 | orchestrator | 2025-09-13 00:53:10.833105 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-13 00:53:10.833114 | orchestrator | Saturday 13 September 2025 00:53:04 +0000 (0:00:00.140) 0:02:24.855 **** 2025-09-13 00:53:10.833124 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:53:10.833133 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:53:10.833143 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:53:10.833152 | orchestrator | 2025-09-13 00:53:10.833162 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-13 00:53:10.833171 | orchestrator | Saturday 13 September 2025 00:53:05 +0000 (0:00:00.791) 0:02:25.646 **** 2025-09-13 00:53:10.833181 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:53:10.833227 | orchestrator | skipping: [testbed-node-2] 2025-09-13 
00:53:10.833238 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:53:10.833248 | orchestrator | 2025-09-13 00:53:10.833258 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-13 00:53:10.833267 | orchestrator | Saturday 13 September 2025 00:53:05 +0000 (0:00:00.701) 0:02:26.347 **** 2025-09-13 00:53:10.833277 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:53:10.833286 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:53:10.833296 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:53:10.833306 | orchestrator | 2025-09-13 00:53:10.833315 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-13 00:53:10.833325 | orchestrator | Saturday 13 September 2025 00:53:06 +0000 (0:00:00.763) 0:02:27.110 **** 2025-09-13 00:53:10.833334 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:53:10.833344 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:53:10.833354 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:53:10.833363 | orchestrator | 2025-09-13 00:53:10.833373 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-13 00:53:10.833383 | orchestrator | Saturday 13 September 2025 00:53:07 +0000 (0:00:00.673) 0:02:27.784 **** 2025-09-13 00:53:10.833392 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:53:10.833402 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:53:10.833412 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:53:10.833421 | orchestrator | 2025-09-13 00:53:10.833431 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-13 00:53:10.833440 | orchestrator | Saturday 13 September 2025 00:53:08 +0000 (0:00:00.700) 0:02:28.484 **** 2025-09-13 00:53:10.833450 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:53:10.833459 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:53:10.833469 | orchestrator | ok: [testbed-node-2] 
2025-09-13 00:53:10.833478 | orchestrator | 2025-09-13 00:53:10.833488 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:53:10.833498 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-13 00:53:10.833508 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-13 00:53:10.833518 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-13 00:53:10.833533 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:53:10.833543 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:53:10.833552 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 00:53:10.833560 | orchestrator | 2025-09-13 00:53:10.833568 | orchestrator | 2025-09-13 00:53:10.833576 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 00:53:10.833584 | orchestrator | Saturday 13 September 2025 00:53:08 +0000 (0:00:00.874) 0:02:29.359 **** 2025-09-13 00:53:10.833592 | orchestrator | =============================================================================== 2025-09-13 00:53:10.833600 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 35.44s 2025-09-13 00:53:10.833607 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 17.85s 2025-09-13 00:53:10.833615 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.00s 2025-09-13 00:53:10.833623 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.43s 2025-09-13 00:53:10.833631 | orchestrator | ovn-db : Restart ovn-sb-db 
container ----------------------------------- 13.39s 2025-09-13 00:53:10.833661 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.05s 2025-09-13 00:53:10.833670 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.00s 2025-09-13 00:53:10.833683 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.86s 2025-09-13 00:53:10.833691 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.80s 2025-09-13 00:53:10.833699 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.29s 2025-09-13 00:53:10.833707 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.65s 2025-09-13 00:53:10.833715 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.48s 2025-09-13 00:53:10.833723 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.46s 2025-09-13 00:53:10.833730 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.43s 2025-09-13 00:53:10.833738 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.41s 2025-09-13 00:53:10.833746 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.31s 2025-09-13 00:53:10.833754 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.30s 2025-09-13 00:53:10.833762 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.23s 2025-09-13 00:53:10.833769 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.20s 2025-09-13 00:53:10.833777 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.17s 2025-09-13 00:53:10.833785 | orchestrator | 2025-09-13 00:53:10 | INFO  | Task 
62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:53:10.833793 | orchestrator | 2025-09-13 00:53:10 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state STARTED 2025-09-13 00:53:10.833801 | orchestrator | 2025-09-13 00:53:10 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:55:43.106389 | orchestrator | 2025-09-13 00:55:43 | INFO  | Task e5b65605-2759-4d22-b24c-1a6e872a3456 is in state STARTED 2025-09-13 00:55:43.110193 | orchestrator | 2025-09-13 00:55:43 | INFO  | Task 8acf60d3-1bd8-4dbb-a4b9-f2d35a745e30 is in state STARTED 2025-09-13 00:55:43.113807 | orchestrator | 2025-09-13 00:55:43 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:55:43.127229 | orchestrator | 2025-09-13 00:55:43 | INFO  | Task 266d6daa-0e42-4d8b-8e10-1c48c5531c12 is in state SUCCESS 2025-09-13 00:55:43.130450 | orchestrator | 2025-09-13 00:55:43.130508 | orchestrator | 2025-09-13 00:55:43.130529 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-13 00:55:43.130551 | orchestrator | 2025-09-13
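The interleaved INFO messages above come from a client polling the state of background tasks (identified by UUID) once per interval until each one reaches SUCCESS. A minimal sketch of that pattern, assuming a hypothetical `get_task_state` lookup in place of the real osism task API:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, log=print):
    """Poll each task until all of them leave the STARTED state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

# Toy backend for illustration: each task succeeds after a fixed number of polls.
def make_backend(polls_until_success):
    counts = {}
    def get_task_state(task_id):
        counts[task_id] = counts.get(task_id, 0) + 1
        done = counts[task_id] >= polls_until_success[task_id]
        return "SUCCESS" if done else "STARTED"
    return get_task_state
```

Note how a fixed one-second interval produces exactly the cadence seen in the log: one status line per pending task, then a "Wait 1 second(s)" line, repeated until the task set drains.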
00:55:43.130570 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-13 00:55:43.130590 | orchestrator | Saturday 13 September 2025 00:49:17 +0000 (0:00:00.677) 0:00:00.677 **** 2025-09-13 00:55:43.130610 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:55:43.130630 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:55:43.130649 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:55:43.130668 | orchestrator | 2025-09-13 00:55:43.130687 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-13 00:55:43.130707 | orchestrator | Saturday 13 September 2025 00:49:17 +0000 (0:00:00.621) 0:00:01.299 **** 2025-09-13 00:55:43.130726 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-13 00:55:43.130747 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-13 00:55:43.130777 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-13 00:55:43.130798 | orchestrator | 2025-09-13 00:55:43.130817 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-13 00:55:43.130835 | orchestrator | 2025-09-13 00:55:43.130851 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-13 00:55:43.130863 | orchestrator | Saturday 13 September 2025 00:49:19 +0000 (0:00:01.107) 0:00:02.407 **** 2025-09-13 00:55:43.130874 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:55:43.130885 | orchestrator | 2025-09-13 00:55:43.130896 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-09-13 00:55:43.130907 | orchestrator | Saturday 13 September 2025 00:49:20 +0000 (0:00:01.521) 0:00:03.928 **** 2025-09-13 00:55:43.130919 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:55:43.130929 | 
orchestrator | ok: [testbed-node-2] 2025-09-13 00:55:43.130940 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:55:43.130951 | orchestrator | 2025-09-13 00:55:43.130961 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-13 00:55:43.130972 | orchestrator | Saturday 13 September 2025 00:49:21 +0000 (0:00:01.095) 0:00:05.023 **** 2025-09-13 00:55:43.130983 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:55:43.130993 | orchestrator | 2025-09-13 00:55:43.131004 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-13 00:55:43.131015 | orchestrator | Saturday 13 September 2025 00:49:24 +0000 (0:00:02.576) 0:00:07.600 **** 2025-09-13 00:55:43.131027 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:55:43.131039 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:55:43.131051 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:55:43.131063 | orchestrator | 2025-09-13 00:55:43.131075 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-09-13 00:55:43.131111 | orchestrator | Saturday 13 September 2025 00:49:25 +0000 (0:00:00.774) 0:00:08.374 **** 2025-09-13 00:55:43.131126 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-13 00:55:43.131138 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-13 00:55:43.131150 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-13 00:55:43.131162 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-13 00:55:43.131192 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-13 00:55:43.131205 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-13 00:55:43.131217 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-13 00:55:43.131230 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-13 00:55:43.131243 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-13 00:55:43.131256 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-13 00:55:43.131268 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-13 00:55:43.131280 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-13 00:55:43.131292 | orchestrator | 2025-09-13 00:55:43.131304 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-13 00:55:43.131317 | orchestrator | Saturday 13 September 2025 00:49:28 +0000 (0:00:03.213) 0:00:11.587 **** 2025-09-13 00:55:43.131329 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-13 00:55:43.131342 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-13 00:55:43.131354 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-13 00:55:43.131367 | orchestrator | 2025-09-13 00:55:43.131380 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-13 00:55:43.131391 | orchestrator | Saturday 13 September 2025 00:49:29 +0000 (0:00:01.135) 0:00:12.723 **** 2025-09-13 00:55:43.131401 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-13 00:55:43.131412 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-13 00:55:43.131423 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-13 00:55:43.131434 | orchestrator | 2025-09-13 
00:55:43.131445 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-13 00:55:43.131455 | orchestrator | Saturday 13 September 2025 00:49:31 +0000 (0:00:01.931) 0:00:14.655 **** 2025-09-13 00:55:43.131466 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-09-13 00:55:43.131477 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.131501 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-09-13 00:55:43.131512 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.131523 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-13 00:55:43.131534 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.131545 | orchestrator | 2025-09-13 00:55:43.131556 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-13 00:55:43.131566 | orchestrator | Saturday 13 September 2025 00:49:32 +0000 (0:00:00.843) 0:00:15.498 **** 2025-09-13 00:55:43.131588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-13 00:55:43.131616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-13 00:55:43.131645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-13 00:55:43.131658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-13 00:55:43.131679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-13 00:55:43.131700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-13 00:55:43.131731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-13 00:55:43.131759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-13 00:55:43.131780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-13 00:55:43.131811 | orchestrator | 2025-09-13 00:55:43.131832 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-13 00:55:43.131852 | orchestrator | Saturday 13 September 2025 00:49:35 +0000 (0:00:03.448) 0:00:18.947 **** 2025-09-13 00:55:43.131873 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.131893 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.131913 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.131932 | orchestrator | 2025-09-13 00:55:43.131953 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-13 00:55:43.131973 | orchestrator | Saturday 13 September 2025 00:49:37 +0000 (0:00:01.516) 0:00:20.463 **** 2025-09-13 00:55:43.131993 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-13 00:55:43.132011 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-13 00:55:43.132030 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-13 00:55:43.132050 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-13 00:55:43.132068 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-13 00:55:43.132130 | orchestrator | changed: [testbed-node-2] => 
(item=rules) 2025-09-13 00:55:43.132151 | orchestrator | 2025-09-13 00:55:43.132170 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-13 00:55:43.132189 | orchestrator | Saturday 13 September 2025 00:49:40 +0000 (0:00:03.434) 0:00:23.898 **** 2025-09-13 00:55:43.132208 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.132227 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.132246 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.132265 | orchestrator | 2025-09-13 00:55:43.132283 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-13 00:55:43.132302 | orchestrator | Saturday 13 September 2025 00:49:42 +0000 (0:00:01.941) 0:00:25.839 **** 2025-09-13 00:55:43.132323 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:55:43.132343 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:55:43.132363 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:55:43.132383 | orchestrator | 2025-09-13 00:55:43.132403 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-13 00:55:43.132423 | orchestrator | Saturday 13 September 2025 00:49:44 +0000 (0:00:02.360) 0:00:28.200 **** 2025-09-13 00:55:43.132443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-13 00:55:43.132478 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-13 00:55:43.132508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-13 00:55:43.132544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__48a46ef8065e4bbb671b8edb096aaeae0eed2d4a', '__omit_place_holder__48a46ef8065e4bbb671b8edb096aaeae0eed2d4a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-13 00:55:43.132565 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.132587 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-13 00:55:43.132607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-13 00:55:43.132628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-13 00:55:43.132646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 
'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__48a46ef8065e4bbb671b8edb096aaeae0eed2d4a', '__omit_place_holder__48a46ef8065e4bbb671b8edb096aaeae0eed2d4a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-13 00:55:43.132657 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.132677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-13 00:55:43.132705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-13 
00:55:43.132717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-13 00:55:43.132729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__48a46ef8065e4bbb671b8edb096aaeae0eed2d4a', '__omit_place_holder__48a46ef8065e4bbb671b8edb096aaeae0eed2d4a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-13 00:55:43.132740 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.132751 | orchestrator | 2025-09-13 00:55:43.132761 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-13 00:55:43.132772 | orchestrator | Saturday 13 September 2025 00:49:46 +0000 (0:00:01.438) 0:00:29.638 **** 2025-09-13 00:55:43.132784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-13 00:55:43.132795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-13 00:55:43.132815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-13 00:55:43.132839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-13 00:55:43.132851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-13 00:55:43.132862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__48a46ef8065e4bbb671b8edb096aaeae0eed2d4a', '__omit_place_holder__48a46ef8065e4bbb671b8edb096aaeae0eed2d4a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-13 00:55:43.132874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-13 00:55:43.132885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-13 00:55:43.132896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__48a46ef8065e4bbb671b8edb096aaeae0eed2d4a', '__omit_place_holder__48a46ef8065e4bbb671b8edb096aaeae0eed2d4a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-13 00:55:43.132920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-13 00:55:43.132936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-13 00:55:43.132948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__48a46ef8065e4bbb671b8edb096aaeae0eed2d4a', '__omit_place_holder__48a46ef8065e4bbb671b8edb096aaeae0eed2d4a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-13 00:55:43.132960 | orchestrator | 2025-09-13 00:55:43.132971 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-13 00:55:43.132981 | orchestrator | Saturday 13 September 2025 00:49:50 +0000 (0:00:04.463) 0:00:34.101 **** 2025-09-13 00:55:43.132993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.133004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.133016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.133046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.133063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.133075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.133086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.133153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.133165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.133177 | orchestrator |
2025-09-13 00:55:43.133187 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-09-13 00:55:43.133206 | orchestrator | Saturday 13 September 2025 00:49:54 +0000 (0:00:03.504) 0:00:37.606 ****
2025-09-13 00:55:43.133217 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-13 00:55:43.133228 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-13 00:55:43.133239 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-13 00:55:43.133250 | orchestrator |
2025-09-13 00:55:43.133261 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-09-13 00:55:43.133271 | orchestrator | Saturday 13 September 2025 00:49:57 +0000 (0:00:03.029) 0:00:40.636 ****
2025-09-13 00:55:43.133282 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-13 00:55:43.133293 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-13 00:55:43.133304 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-13 00:55:43.133315 | orchestrator |
2025-09-13 00:55:43.135972 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-09-13 00:55:43.136013 | orchestrator | Saturday 13 September 2025 00:50:04 +0000 (0:00:07.664) 0:00:48.300 ****
2025-09-13 00:55:43.136024 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:55:43.136036 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:55:43.136046 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:55:43.136057 | orchestrator |
2025-09-13 00:55:43.136068 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-09-13 00:55:43.136078 | orchestrator | Saturday 13 September 2025 00:50:05 +0000 (0:00:00.604) 0:00:48.904 ****
2025-09-13 00:55:43.136117 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-13 00:55:43.136137 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-13 00:55:43.136148 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-13 00:55:43.136159 | orchestrator |
2025-09-13 00:55:43.136169 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-09-13 00:55:43.136180 | orchestrator | Saturday 13 September 2025 00:50:08 +0000 (0:00:03.036) 0:00:51.941 ****
2025-09-13 00:55:43.136191 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-13 00:55:43.136209 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-13 00:55:43.136227 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-13 00:55:43.136246 | orchestrator |
2025-09-13 00:55:43.136264 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-09-13 00:55:43.136283 | orchestrator | Saturday 13 September 2025 00:50:10 +0000 (0:00:02.145) 0:00:54.086 ****
2025-09-13 00:55:43.136303 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-09-13 00:55:43.136323 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-09-13 00:55:43.136343 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-09-13 00:55:43.136363 | orchestrator |
2025-09-13 00:55:43.136383 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-09-13 00:55:43.136402 | orchestrator | Saturday 13 September 2025 00:50:13 +0000 (0:00:02.293) 0:00:56.379 ****
2025-09-13 00:55:43.136422 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-09-13 00:55:43.136441 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-09-13 00:55:43.136461 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-09-13 00:55:43.136497 | orchestrator |
2025-09-13 00:55:43.136517 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-09-13 00:55:43.136536 | orchestrator | Saturday 13 September 2025 00:50:15 +0000 (0:00:02.364) 0:00:58.744 ****
2025-09-13 00:55:43.136557 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:55:43.136578 | orchestrator |
2025-09-13 00:55:43.136598 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-09-13 00:55:43.136623 | orchestrator | Saturday 13 September 2025 00:50:16 +0000 (0:00:00.999) 0:00:59.744 ****
2025-09-13 00:55:43.136651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.136677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.136714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.136743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.136764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.136784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.136816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.136837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.136857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.136878 | orchestrator |
2025-09-13 00:55:43.136897 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2025-09-13 00:55:43.136916 | orchestrator | Saturday 13 September 2025 00:50:20 +0000 (0:00:04.605) 0:01:04.349 ****
2025-09-13 00:55:43.136947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.136982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.137004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.137035 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:55:43.137056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.137077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.137119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.137139 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:55:43.137160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.137190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.137219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.137252 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:55:43.137273 | orchestrator |
2025-09-13 00:55:43.137292 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2025-09-13 00:55:43.137312 | orchestrator | Saturday 13 September 2025 00:50:22 +0000 (0:00:01.524) 0:01:05.874 ****
2025-09-13 00:55:43.137333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.137355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.137374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.137392 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:55:43.137410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.137439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.137460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.137487 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:55:43.137499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.137510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.137546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.137558 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:55:43.137569 | orchestrator |
2025-09-13 00:55:43.137580 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-09-13 00:55:43.137591 | orchestrator | Saturday 13 September 2025 00:50:23 +0000 (0:00:01.167) 0:01:07.041 ****
2025-09-13 00:55:43.137602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.137622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.137633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.137651 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:55:43.137666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.137678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.137690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.137701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.137712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.137729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.137741 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:55:43.137752 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:55:43.137763 | orchestrator |
2025-09-13 00:55:43.137773 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-09-13 00:55:43.137790 | orchestrator | Saturday 13 September 2025 00:50:24 +0000 (0:00:01.006) 0:01:08.048 ****
2025-09-13 00:55:43.137806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.137818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.137829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.137841 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:55:43.137852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.137863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.137874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.137886 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:55:43.137903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.137926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.137937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.137949 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:55:43.137960 | orchestrator |
2025-09-13 00:55:43.137971 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-09-13 00:55:43.137982 | orchestrator | Saturday 13 September 2025 00:50:25 +0000 (0:00:00.700) 0:01:08.749 ****
2025-09-13 00:55:43.137993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-13 00:55:43.138005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-13 00:55:43.138064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-13 00:55:43.138079 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.138251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-13 00:55:43.138305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-13 00:55:43.138318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-13 00:55:43.138330 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.138341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-13 00:55:43.138353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-13 00:55:43.138364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-13 00:55:43.138375 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.138386 | orchestrator | 2025-09-13 00:55:43.138397 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-13 00:55:43.138408 | orchestrator | Saturday 13 September 2025 00:50:26 +0000 (0:00:01.104) 0:01:09.853 **** 2025-09-13 00:55:43.138419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-13 00:55:43.138442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-13 00:55:43.138461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-13 00:55:43.138472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-13 00:55:43.138482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-13 00:55:43.138491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-13 00:55:43.138501 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.138511 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.138521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-13 00:55:43.138542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-13 00:55:43.138552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-13 00:55:43.138562 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.138572 | orchestrator | 2025-09-13 00:55:43.138586 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-13 00:55:43.138596 | orchestrator | Saturday 13 September 2025 00:50:28 +0000 (0:00:01.911) 
0:01:11.765 **** 2025-09-13 00:55:43.138606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-13 00:55:43.138616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-13 00:55:43.138626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-13 00:55:43.138636 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.138646 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-13 00:55:43.138662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-13 00:55:43.138679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-13 00:55:43.138690 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.138704 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-13 00:55:43.138714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-13 00:55:43.138724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-13 00:55:43.138734 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.138744 | orchestrator | 2025-09-13 00:55:43.138754 | orchestrator | TASK [service-cert-copy : proxysql 
| Copying over backend internal TLS key] **** 2025-09-13 00:55:43.138763 | orchestrator | Saturday 13 September 2025 00:50:30 +0000 (0:00:01.685) 0:01:13.451 **** 2025-09-13 00:55:43.138773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-13 00:55:43.138788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-13 00:55:43.138799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-13 00:55:43.138809 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.138825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-13 00:55:43.138840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-13 00:55:43.138851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-09-13 00:55:43.138861 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.138870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-13 00:55:43.138892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-13 00:55:43.138909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-13 00:55:43.138926 | 
orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.138942 | orchestrator | 2025-09-13 00:55:43.138952 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-13 00:55:43.138962 | orchestrator | Saturday 13 September 2025 00:50:30 +0000 (0:00:00.747) 0:01:14.198 **** 2025-09-13 00:55:43.138972 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-13 00:55:43.138982 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-13 00:55:43.138997 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-13 00:55:43.139007 | orchestrator | 2025-09-13 00:55:43.139017 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-13 00:55:43.139027 | orchestrator | Saturday 13 September 2025 00:50:32 +0000 (0:00:01.609) 0:01:15.808 **** 2025-09-13 00:55:43.139037 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-13 00:55:43.139046 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-13 00:55:43.139056 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-13 00:55:43.139065 | orchestrator | 2025-09-13 00:55:43.139075 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-13 00:55:43.139117 | orchestrator | Saturday 13 September 2025 00:50:33 +0000 (0:00:01.420) 0:01:17.228 **** 2025-09-13 00:55:43.139129 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-13 00:55:43.139139 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-13 00:55:43.139148 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-13 00:55:43.139158 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-13 00:55:43.139168 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.139177 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-13 00:55:43.139187 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.139197 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-13 00:55:43.139207 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.139223 | orchestrator | 2025-09-13 00:55:43.139233 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-13 00:55:43.139242 | orchestrator | Saturday 13 September 2025 00:50:34 +0000 (0:00:00.901) 0:01:18.130 **** 2025-09-13 00:55:43.139252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-13 00:55:43.139263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.139273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-13 00:55:43.139289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.139304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.139315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-13 00:55:43.139331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.139341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.139351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-13 00:55:43.139361 | orchestrator |
2025-09-13 00:55:43.139371 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-09-13 00:55:43.139381 | orchestrator | Saturday 13 September 2025 00:50:37 +0000 (0:00:02.495) 0:01:20.625 ****
2025-09-13 00:55:43.139390 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:55:43.139400 | orchestrator |
2025-09-13 00:55:43.139410 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-09-13 00:55:43.139419 | orchestrator | Saturday 13 September 2025 00:50:37 +0000 (0:00:00.536) 0:01:21.161 ****
2025-09-13 00:55:43.139430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-13 00:55:43.139447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-13 00:55:43.139462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.139478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.139489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-13 00:55:43.139499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-13 00:55:43.139509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.143989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-13 00:55:43.144121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-13 00:55:43.144133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144154 | orchestrator |
2025-09-13 00:55:43.144166 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-09-13 00:55:43.144177 | orchestrator | Saturday 13 September 2025 00:50:42 +0000 (0:00:04.241) 0:01:25.403 ****
2025-09-13 00:55:43.144188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-13 00:55:43.144213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-13 00:55:43.144228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144255 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:55:43.144266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-13 00:55:43.144276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-13 00:55:43.144286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144307 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:55:43.144329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-13 00:55:43.144345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-13 00:55:43.144355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144376 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:55:43.144385 | orchestrator |
2025-09-13 00:55:43.144395 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-09-13 00:55:43.144406 | orchestrator | Saturday 13 September 2025 00:50:42 +0000 (0:00:00.926) 0:01:26.329 ****
2025-09-13 00:55:43.144416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-09-13 00:55:43.144427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-09-13 00:55:43.144439 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:55:43.144448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-09-13 00:55:43.144458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-09-13 00:55:43.144468 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:55:43.144478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-09-13 00:55:43.144493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-09-13 00:55:43.144503 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:55:43.144513 | orchestrator |
2025-09-13 00:55:43.144527 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-09-13 00:55:43.144537 | orchestrator | Saturday 13 September 2025 00:50:44 +0000 (0:00:01.261) 0:01:27.590 ****
2025-09-13 00:55:43.144547 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:55:43.144556 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:55:43.144566 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:55:43.144575 | orchestrator |
2025-09-13 00:55:43.144585 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-09-13 00:55:43.144594 | orchestrator | Saturday 13 September 2025 00:50:45 +0000 (0:00:01.554) 0:01:29.145 ****
2025-09-13 00:55:43.144604 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:55:43.144613 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:55:43.144623 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:55:43.144633 | orchestrator |
2025-09-13 00:55:43.144642 | orchestrator | TASK [include_role : barbican] *************************************************
2025-09-13 00:55:43.144655 | orchestrator | Saturday 13 September 2025 00:50:47 +0000 (0:00:02.186) 0:01:31.332 ****
2025-09-13 00:55:43.144665 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:55:43.144674 | orchestrator |
2025-09-13 00:55:43.144684 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-09-13 00:55:43.144694 | orchestrator | Saturday 13 September 2025 00:50:48 +0000 (0:00:00.947) 0:01:32.279 ****
2025-09-13 00:55:43.144705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-13 00:55:43.144715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-13 00:55:43.144726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-13 00:55:43.144794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144819 | orchestrator |
2025-09-13 00:55:43.144830 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-09-13 00:55:43.144839 | orchestrator | Saturday 13 September 2025 00:50:52 +0000 (0:00:03.722) 0:01:36.002 ****
2025-09-13 00:55:43.144855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-13 00:55:43.144870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-13 00:55:43.144891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144916 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:55:43.144926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144936 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:55:43.144951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-13 00:55:43.144966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-13 00:55:43.144976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'],
'timeout': '30'}}})  2025-09-13 00:55:43.144986 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.144996 | orchestrator | 2025-09-13 00:55:43.145006 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-13 00:55:43.145016 | orchestrator | Saturday 13 September 2025 00:50:54 +0000 (0:00:01.834) 0:01:37.837 **** 2025-09-13 00:55:43.145026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-13 00:55:43.145036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-13 00:55:43.145056 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.145066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-13 00:55:43.145076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-13 00:55:43.145086 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.145156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-13 00:55:43.145166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-13 00:55:43.145176 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.145185 | orchestrator | 2025-09-13 00:55:43.145195 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-13 00:55:43.145205 | orchestrator | Saturday 13 September 2025 00:50:55 +0000 (0:00:01.158) 0:01:38.995 **** 2025-09-13 00:55:43.145214 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.145224 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.145234 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.145243 | orchestrator | 2025-09-13 00:55:43.145253 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-13 00:55:43.145262 | orchestrator | Saturday 13 September 2025 00:50:56 +0000 (0:00:01.345) 0:01:40.341 **** 2025-09-13 00:55:43.145272 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.145281 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.145291 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.145300 | orchestrator | 2025-09-13 00:55:43.145315 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-13 00:55:43.145325 | orchestrator | Saturday 13 September 2025 00:50:59 +0000 (0:00:02.082) 0:01:42.423 **** 2025-09-13 00:55:43.145335 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.145345 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.145354 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.145364 | orchestrator | 2025-09-13 00:55:43.145374 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-13 00:55:43.145384 | orchestrator | Saturday 13 September 2025 00:50:59 +0000 (0:00:00.302) 0:01:42.726 **** 2025-09-13 00:55:43.145393 | orchestrator | included: ceph-rgw for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-09-13 00:55:43.145403 | orchestrator | 2025-09-13 00:55:43.145413 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-13 00:55:43.145423 | orchestrator | Saturday 13 September 2025 00:51:00 +0000 (0:00:00.844) 0:01:43.570 **** 2025-09-13 00:55:43.145433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-13 00:55:43.145451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-13 00:55:43.145461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-13 00:55:43.145471 | orchestrator | 2025-09-13 00:55:43.145481 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-13 00:55:43.145490 | orchestrator | Saturday 13 September 2025 00:51:02 +0000 (0:00:02.552) 0:01:46.122 **** 2025-09-13 00:55:43.145525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 
rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-13 00:55:43.145537 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.145550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-13 00:55:43.145560 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.145570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-13 
00:55:43.145590 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.145599 | orchestrator | 2025-09-13 00:55:43.145609 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-13 00:55:43.145619 | orchestrator | Saturday 13 September 2025 00:51:04 +0000 (0:00:01.494) 0:01:47.617 **** 2025-09-13 00:55:43.145629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-13 00:55:43.145641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-13 00:55:43.145651 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.145661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-13 00:55:43.145672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-13 00:55:43.145682 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.145696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-13 00:55:43.145707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-13 00:55:43.145717 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.145726 | orchestrator | 2025-09-13 00:55:43.145740 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-09-13 00:55:43.145750 | orchestrator | Saturday 13 September 2025 00:51:05 +0000 (0:00:01.645) 0:01:49.263 **** 2025-09-13 00:55:43.145765 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.145775 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.145784 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.145794 | orchestrator | 2025-09-13 00:55:43.145803 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-13 00:55:43.145813 | 
orchestrator | Saturday 13 September 2025 00:51:06 +0000 (0:00:00.696) 0:01:49.959 **** 2025-09-13 00:55:43.145822 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.145832 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.145841 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.145851 | orchestrator | 2025-09-13 00:55:43.145860 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-13 00:55:43.145870 | orchestrator | Saturday 13 September 2025 00:51:07 +0000 (0:00:01.189) 0:01:51.148 **** 2025-09-13 00:55:43.145879 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:55:43.145889 | orchestrator | 2025-09-13 00:55:43.145898 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-13 00:55:43.145908 | orchestrator | Saturday 13 September 2025 00:51:08 +0000 (0:00:00.726) 0:01:51.875 **** 2025-09-13 00:55:43.145918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 00:55:43.145929 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 00:55:43.145939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.145956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.145975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.145985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.145996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 00:55:43.146103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146135 | orchestrator | 2025-09-13 00:55:43.146145 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-13 00:55:43.146155 | orchestrator | Saturday 13 September 2025 00:51:11 +0000 (0:00:03.402) 0:01:55.278 **** 2025-09-13 00:55:43.146165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.146176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.146221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146231 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.146241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146283 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.146297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.146307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146346 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.146356 | orchestrator | 2025-09-13 00:55:43.146365 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-09-13 00:55:43.146375 | orchestrator | Saturday 13 September 2025 00:51:12 +0000 (0:00:00.958) 0:01:56.237 **** 2025-09-13 00:55:43.146385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-13 00:55:43.146408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-13 00:55:43.146419 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.146429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-13 00:55:43.146439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-13 00:55:43.146448 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.146461 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-13 00:55:43.146472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-13 00:55:43.146482 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.146492 | orchestrator | 2025-09-13 00:55:43.146501 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-09-13 00:55:43.146511 | orchestrator | Saturday 13 September 2025 00:51:13 +0000 (0:00:00.982) 0:01:57.220 **** 2025-09-13 00:55:43.146521 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.146530 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.146540 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.146549 | orchestrator | 2025-09-13 00:55:43.146559 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-09-13 00:55:43.146569 | orchestrator | Saturday 13 September 2025 00:51:15 +0000 (0:00:01.355) 0:01:58.575 **** 2025-09-13 00:55:43.146578 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.146588 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.146597 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.146607 | orchestrator | 2025-09-13 00:55:43.146617 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-09-13 00:55:43.146627 | orchestrator | Saturday 13 September 2025 00:51:17 +0000 (0:00:02.269) 0:02:00.844 **** 2025-09-13 00:55:43.146636 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.146646 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.146656 | 
orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.146665 | orchestrator | 2025-09-13 00:55:43.146675 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-09-13 00:55:43.146685 | orchestrator | Saturday 13 September 2025 00:51:18 +0000 (0:00:00.535) 0:02:01.380 **** 2025-09-13 00:55:43.146694 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.146704 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.146713 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.146723 | orchestrator | 2025-09-13 00:55:43.146733 | orchestrator | TASK [include_role : designate] ************************************************ 2025-09-13 00:55:43.146748 | orchestrator | Saturday 13 September 2025 00:51:18 +0000 (0:00:00.317) 0:02:01.697 **** 2025-09-13 00:55:43.146757 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:55:43.146767 | orchestrator | 2025-09-13 00:55:43.146777 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-09-13 00:55:43.146786 | orchestrator | Saturday 13 September 2025 00:51:19 +0000 (0:00:00.795) 0:02:02.493 **** 2025-09-13 00:55:43.146796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-13 00:55:43.146812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-13 00:55:43.146822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146918 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-13 00:55:43.146933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-13 00:55:43.146948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.146993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-13 00:55:43.147023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-13 00:55:43.147034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147107 | orchestrator | 2025-09-13 00:55:43.147117 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-13 00:55:43.147127 | orchestrator | Saturday 13 September 2025 00:51:23 +0000 (0:00:04.434) 0:02:06.928 **** 2025-09-13 00:55:43.147147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-13 00:55:43.147158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-13 00:55:43.147168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-13 00:55:43.147234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147244 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.147255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-13 00:55:43.147270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147305 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147325 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.147339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-13 00:55:43.147355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-13 00:55:43.147365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.147417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-13 
00:55:43.147433 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.147442 | orchestrator | 2025-09-13 00:55:43.147452 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-09-13 00:55:43.147462 | orchestrator | Saturday 13 September 2025 00:51:24 +0000 (0:00:00.847) 0:02:07.775 **** 2025-09-13 00:55:43.147472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-13 00:55:43.147482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-13 00:55:43.147492 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.147502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-13 00:55:43.147512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-13 00:55:43.147521 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.147531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-13 00:55:43.147540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-13 00:55:43.147550 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.147560 | 
orchestrator | 2025-09-13 00:55:43.147569 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-09-13 00:55:43.147579 | orchestrator | Saturday 13 September 2025 00:51:25 +0000 (0:00:00.981) 0:02:08.757 **** 2025-09-13 00:55:43.147589 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.147598 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.147608 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.147617 | orchestrator | 2025-09-13 00:55:43.147627 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-13 00:55:43.147637 | orchestrator | Saturday 13 September 2025 00:51:27 +0000 (0:00:01.806) 0:02:10.564 **** 2025-09-13 00:55:43.147646 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.147656 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.147666 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.147675 | orchestrator | 2025-09-13 00:55:43.147685 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-13 00:55:43.147694 | orchestrator | Saturday 13 September 2025 00:51:29 +0000 (0:00:01.870) 0:02:12.434 **** 2025-09-13 00:55:43.147704 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.147713 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.147723 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.147732 | orchestrator | 2025-09-13 00:55:43.147742 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-13 00:55:43.147752 | orchestrator | Saturday 13 September 2025 00:51:29 +0000 (0:00:00.524) 0:02:12.958 **** 2025-09-13 00:55:43.147761 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:55:43.147771 | orchestrator | 2025-09-13 00:55:43.147781 | orchestrator | TASK [haproxy-config : Copying over glance haproxy 
config] ********************* 2025-09-13 00:55:43.147790 | orchestrator | Saturday 13 September 2025 00:51:30 +0000 (0:00:00.786) 0:02:13.745 **** 2025-09-13 00:55:43.147814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-13 00:55:43.147832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 
'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-13 00:55:43.147854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-13 00:55:43.147871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-13 00:55:43.147889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': 
{'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-13 00:55:43.147909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-13 00:55:43.147920 | orchestrator | 2025-09-13 00:55:43.147930 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-13 00:55:43.147940 | orchestrator | Saturday 13 September 2025 00:51:34 +0000 (0:00:04.201) 0:02:17.947 **** 2025-09-13 00:55:43.147956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-13 00:55:43.147977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-13 00:55:43.147988 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.147999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-13 00:55:43.148026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-13 00:55:43.148037 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.148048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-13 00:55:43.148069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 
'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
skipping: [testbed-node-2]

TASK [haproxy-config : Configuring firewall for glance] ************************
Saturday 13 September 2025 00:51:37 +0000 (0:00:03.021) 0:02:20.969 ****
skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
skipping: [testbed-node-2]
skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
skipping: [testbed-node-1]

TASK [proxysql-config : Copying over glance ProxySQL users config] *************
Saturday 13 September 2025 00:51:41 +0000 (0:00:03.498) 0:02:24.467 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
Saturday 13 September 2025 00:51:42 +0000 (0:00:01.168) 0:02:25.636 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [include_role : gnocchi] **************************************************
Saturday 13 September 2025 00:51:44 +0000 (0:00:02.043) 0:02:27.679 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [include_role : grafana] **************************************************
Saturday 13 September 2025 00:51:44 +0000 (0:00:00.532) 0:02:28.212 ****
included: grafana for testbed-node-0, testbed-node-1, testbed-node-2

TASK [haproxy-config : Copying over grafana haproxy config] ********************
Saturday 13 September 2025 00:51:45 +0000 (0:00:00.837) 0:02:29.049 ****
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})

TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
Saturday 13 September 2025 00:51:48 +0000 (0:00:03.075) 0:02:32.125 ****
skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-2]

TASK [haproxy-config : Configuring firewall for grafana] ***********************
Saturday 13 September 2025 00:51:49 +0000 (0:00:00.578) 0:02:32.703 ****
skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
skipping: [testbed-node-2]

TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
Saturday 13 September 2025 00:51:49 +0000 (0:00:00.616) 0:02:33.320 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
Saturday 13 September 2025 00:51:51 +0000 (0:00:01.157) 0:02:34.477 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [include_role : heat] *****************************************************
Saturday 13 September 2025 00:51:53 +0000 (0:00:01.936) 0:02:36.413 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [include_role : horizon] **************************************************
Saturday 13 September 2025 00:51:53 +0000 (0:00:00.640) 0:02:37.054 ****
included: horizon for testbed-node-0, testbed-node-1, testbed-node-2

TASK [haproxy-config : Copying over horizon haproxy config] ********************
Saturday 13 September 2025 00:51:54 +0000 (0:00:00.905) 0:02:37.960 ****
changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})

TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
Saturday 13 September 2025 00:51:59 +0000 (0:00:04.556) 0:02:42.517 ****
skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
skipping: [testbed-node-2]

TASK [haproxy-config : Configuring firewall for horizon] ***********************
Saturday 13 September 2025 00:52:00 +0000 (0:00:01.274) 0:02:43.791 ****
skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
skipping: [testbed-node-2]
skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
skipping: [testbed-node-1]

TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
Saturday 13 September 2025 00:52:01 +0000 (0:00:01.106) 0:02:44.897 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
Saturday 13 September 2025 00:52:02 +0000 (0:00:01.185) 0:02:46.083 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [include_role : influxdb] *************************************************
Saturday 13 September 2025 00:52:04 +0000 (0:00:02.213) 0:02:48.296 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [include_role : ironic] ***************************************************
Saturday 13 September 2025 00:52:05 +0000 (0:00:00.265) 0:02:48.561 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [include_role : keystone] *************************************************
Saturday 13 September 2025 00:52:05 +0000 (0:00:00.465) 0:02:49.026 ****
included: keystone for testbed-node-0, testbed-node-1, testbed-node-2

TASK [haproxy-config : Copying over keystone haproxy config] *******************
Saturday 13 September 2025 00:52:06 +0000 (0:00:00.894) 0:02:49.921 ****
changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
changed: [testbed-node-0]
=> (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-13 00:55:43.149724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-13 00:55:43.149738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-13 00:55:43.149755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-13 00:55:43.149765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-13 00:55:43.149775 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-13 00:55:43.149784 | orchestrator | 2025-09-13 00:55:43.149793 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-09-13 00:55:43.149802 | orchestrator | Saturday 13 September 2025 00:52:10 +0000 (0:00:03.487) 0:02:53.408 **** 2025-09-13 00:55:43.149812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-13 
00:55:43.149826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-13 00:55:43.149840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-13 00:55:43.149849 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.149862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-13 00:55:43.149872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-13 00:55:43.149882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-13 00:55:43.149896 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.149906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-13 00:55:43.149920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-13 00:55:43.149933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-13 00:55:43.149941 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.149949 | orchestrator | 2025-09-13 00:55:43.149957 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-09-13 00:55:43.149965 | orchestrator | Saturday 13 September 2025 00:52:10 +0000 (0:00:00.722) 0:02:54.131 **** 2025-09-13 00:55:43.149973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-13 00:55:43.149982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-13 00:55:43.149990 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.149998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-13 00:55:43.150006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-13 00:55:43.150055 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.150066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-13 00:55:43.150074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-13 00:55:43.150082 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.150104 | orchestrator | 2025-09-13 00:55:43.150112 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-09-13 00:55:43.150120 | orchestrator | Saturday 13 September 2025 00:52:11 +0000 (0:00:00.725) 0:02:54.857 **** 2025-09-13 00:55:43.150128 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.150136 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.150143 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.150151 | orchestrator | 2025-09-13 00:55:43.150159 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-09-13 00:55:43.150166 | orchestrator | Saturday 13 September 2025 00:52:12 +0000 (0:00:01.182) 0:02:56.039 **** 2025-09-13 00:55:43.150174 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.150182 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.150190 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.150197 | orchestrator | 2025-09-13 00:55:43.150205 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-09-13 00:55:43.150213 | orchestrator | Saturday 13 September 2025 00:52:14 +0000 (0:00:01.812) 0:02:57.852 **** 2025-09-13 00:55:43.150221 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.150228 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.150236 | orchestrator | skipping: [testbed-node-2] 
2025-09-13 00:55:43.150244 | orchestrator | 2025-09-13 00:55:43.150251 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-09-13 00:55:43.150259 | orchestrator | Saturday 13 September 2025 00:52:15 +0000 (0:00:00.527) 0:02:58.380 **** 2025-09-13 00:55:43.150267 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:55:43.150275 | orchestrator | 2025-09-13 00:55:43.150283 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-09-13 00:55:43.150290 | orchestrator | Saturday 13 September 2025 00:52:16 +0000 (0:00:01.059) 0:02:59.440 **** 2025-09-13 00:55:43.150314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-13 00:55:43.150324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-13 00:55:43.150347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-13 00:55:43.150372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150381 | orchestrator | 2025-09-13 00:55:43.150389 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-09-13 00:55:43.150397 
| orchestrator | Saturday 13 September 2025 00:52:19 +0000 (0:00:03.209) 0:03:02.650 **** 2025-09-13 00:55:43.150405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-13 00:55:43.150417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150426 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.150434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-13 00:55:43.150446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150454 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.150465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-13 00:55:43.150479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150487 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.150495 | orchestrator | 2025-09-13 00:55:43.150503 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-13 00:55:43.150511 | orchestrator | Saturday 13 September 2025 00:52:20 +0000 (0:00:00.777) 0:03:03.427 **** 2025-09-13 00:55:43.150519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-13 00:55:43.150527 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-13 00:55:43.150535 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.150542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-13 00:55:43.150551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-13 00:55:43.150559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-13 00:55:43.150566 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.150574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-13 00:55:43.150582 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.150590 | orchestrator | 2025-09-13 00:55:43.150597 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-13 00:55:43.150605 | orchestrator | Saturday 13 September 2025 00:52:20 +0000 (0:00:00.806) 0:03:04.234 **** 2025-09-13 00:55:43.150613 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.150621 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.150628 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.150636 | orchestrator | 2025-09-13 00:55:43.150644 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules 
config] ************* 2025-09-13 00:55:43.150651 | orchestrator | Saturday 13 September 2025 00:52:22 +0000 (0:00:01.244) 0:03:05.479 **** 2025-09-13 00:55:43.150659 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.150680 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.150688 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.150695 | orchestrator | 2025-09-13 00:55:43.150703 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-13 00:55:43.150711 | orchestrator | Saturday 13 September 2025 00:52:24 +0000 (0:00:02.064) 0:03:07.543 **** 2025-09-13 00:55:43.150722 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:55:43.150735 | orchestrator | 2025-09-13 00:55:43.150743 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-13 00:55:43.150751 | orchestrator | Saturday 13 September 2025 00:52:25 +0000 (0:00:01.075) 0:03:08.619 **** 2025-09-13 00:55:43.150765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-13 00:55:43.150773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 
'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-13 00:55:43.150811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 
5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-13 00:55:43.150852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150885 | orchestrator | 2025-09-13 00:55:43.150893 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-13 00:55:43.150900 | orchestrator | Saturday 13 September 2025 00:52:28 +0000 (0:00:03.190) 0:03:11.810 **** 2025-09-13 00:55:43.150912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-13 00:55:43.150920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  
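[editor's note] The skipped loop items above carry the full `haproxy` service definitions that kolla-ansible's haproxy-config role templates into HAProxy listen sections. As a rough illustration only (this is a hypothetical sketch, not the role's actual Jinja logic), a dict like the `manila_api` entry could render to a listen block along these lines:

```python
# Hypothetical sketch: rendering a kolla-ansible-style 'haproxy' service
# entry (like the manila_api dicts in this log) into an HAProxy listen
# block. render_listen_block and its parameters are illustrative names,
# not part of kolla-ansible.

def render_listen_block(name, svc, backends):
    """Render one HAProxy listen section for a service sub-entry.

    svc      -- e.g. {'enabled': 'yes', 'mode': 'http', 'external': False,
                'port': '8786', 'listen_port': '8786'}
    backends -- list of (hostname, ip) tuples for backend members
    """
    # kolla uses both boolean True and the string 'yes' for enabled flags,
    # as the dicts in this log show
    if svc.get("enabled") not in (True, "yes"):
        return ""
    lines = [f"listen {name}",
             f"    mode {svc['mode']}",
             f"    bind *:{svc['listen_port']}"]
    for host, ip in backends:
        # member options modeled on the 'check port ... inter 2000 rise 2
        # fall 5' entries visible in the mariadb custom_member_list below
        lines.append(f"    server {host} {ip}:{svc['port']} "
                     f"check inter 2000 rise 2 fall 5")
    return "\n".join(lines)

manila_api = {"enabled": "yes", "mode": "http", "external": False,
              "port": "8786", "listen_port": "8786"}
nodes = [("testbed-node-0", "192.168.16.10"),
         ("testbed-node-1", "192.168.16.11"),
         ("testbed-node-2", "192.168.16.12")]
print(render_listen_block("manila_api", manila_api, nodes))
```

The `external: True` variant (`manila_api_external` with `external_fqdn: api.testbed.osism.xyz`) would be bound on the external VIP instead; the internal/external split is why each service appears twice in these loops.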
2025-09-13 00:55:43.150937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150945 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.150953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-13 00:55:43.150970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.150999 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.151007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-13 00:55:43.151015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.151023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.151041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 
'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.151049 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.151057 | orchestrator | 2025-09-13 00:55:43.151065 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-13 00:55:43.151073 | orchestrator | Saturday 13 September 2025 00:52:29 +0000 (0:00:00.596) 0:03:12.407 **** 2025-09-13 00:55:43.151081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-13 00:55:43.151171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-13 00:55:43.151181 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.151189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-13 00:55:43.151197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-13 00:55:43.151205 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.151213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-13 00:55:43.151221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-13 00:55:43.151229 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.151237 | orchestrator | 2025-09-13 00:55:43.151244 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-13 00:55:43.151252 | orchestrator | Saturday 13 September 2025 00:52:30 +0000 (0:00:00.965) 0:03:13.372 **** 2025-09-13 00:55:43.151260 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.151268 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.151276 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.151284 | orchestrator | 2025-09-13 00:55:43.151291 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-13 00:55:43.151299 | orchestrator | Saturday 13 September 2025 00:52:31 +0000 (0:00:01.231) 0:03:14.604 **** 2025-09-13 00:55:43.151307 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.151315 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.151323 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.151331 | orchestrator | 2025-09-13 00:55:43.151339 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-13 00:55:43.151347 | orchestrator | Saturday 13 September 2025 00:52:33 +0000 (0:00:01.962) 0:03:16.566 **** 2025-09-13 00:55:43.151360 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:55:43.151368 | orchestrator | 2025-09-13 00:55:43.151375 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] 
******************************* 2025-09-13 00:55:43.151383 | orchestrator | Saturday 13 September 2025 00:52:34 +0000 (0:00:01.300) 0:03:17.867 **** 2025-09-13 00:55:43.151391 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-13 00:55:43.151399 | orchestrator | 2025-09-13 00:55:43.151407 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-13 00:55:43.151415 | orchestrator | Saturday 13 September 2025 00:52:37 +0000 (0:00:02.894) 0:03:20.762 **** 2025-09-13 00:55:43.151430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-13 00:55:43.151443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-13 00:55:43.151451 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.151460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-13 00:55:43.151474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-13 00:55:43.151483 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.151500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-13 00:55:43.151510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-13 00:55:43.151523 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.151531 | orchestrator | 2025-09-13 00:55:43.151539 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-13 00:55:43.151547 | orchestrator | Saturday 13 September 2025 00:52:39 +0000 (0:00:02.534) 0:03:23.297 **** 2025-09-13 00:55:43.151555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-13 00:55:43.151569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-13 00:55:43.151578 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.151589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-13 00:55:43.151605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-13 00:55:43.151614 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.151631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-13 00:55:43.151639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-13 00:55:43.151646 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.151653 | orchestrator | 2025-09-13 00:55:43.151659 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-13 00:55:43.151670 | orchestrator | Saturday 13 September 2025 00:52:42 +0000 (0:00:02.317) 0:03:25.614 **** 2025-09-13 00:55:43.151677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-13 00:55:43.151684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-13 00:55:43.151691 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.151698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-13 00:55:43.151705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-13 00:55:43.151712 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.151723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-13 00:55:43.151733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-13 00:55:43.151740 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.151751 | orchestrator | 2025-09-13 00:55:43.151758 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-13 00:55:43.151764 | orchestrator | Saturday 13 September 2025 00:52:45 +0000 (0:00:02.923) 0:03:28.537 **** 2025-09-13 00:55:43.151771 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.151778 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.151784 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.151791 | orchestrator | 2025-09-13 00:55:43.151798 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-13 00:55:43.151804 | orchestrator | Saturday 13 September 2025 00:52:47 +0000 (0:00:01.850) 0:03:30.388 **** 2025-09-13 00:55:43.151811 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.151818 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.151824 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.151831 | orchestrator | 2025-09-13 00:55:43.151838 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-13 00:55:43.151844 | orchestrator | Saturday 13 September 2025 00:52:48 +0000 (0:00:01.426) 0:03:31.815 **** 2025-09-13 00:55:43.151851 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.151857 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.151864 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.151871 
| orchestrator | 2025-09-13 00:55:43.151877 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-13 00:55:43.151884 | orchestrator | Saturday 13 September 2025 00:52:48 +0000 (0:00:00.311) 0:03:32.126 **** 2025-09-13 00:55:43.151891 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:55:43.151897 | orchestrator | 2025-09-13 00:55:43.151904 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-13 00:55:43.151910 | orchestrator | Saturday 13 September 2025 00:52:50 +0000 (0:00:01.412) 0:03:33.539 **** 2025-09-13 00:55:43.151917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-13 00:55:43.151925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': 
{'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-13 00:55:43.151936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-13 00:55:43.151947 | orchestrator | 2025-09-13 00:55:43.151954 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-13 00:55:43.151964 | orchestrator | Saturday 13 September 2025 00:52:51 +0000 (0:00:01.399) 0:03:34.939 **** 2025-09-13 00:55:43.151971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-13 00:55:43.151978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-13 00:55:43.151985 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.151991 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.151998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-13 00:55:43.152005 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.152012 | orchestrator | 2025-09-13 00:55:43.152019 | orchestrator | TASK [haproxy-config : Configuring 
firewall for memcached] ********************* 2025-09-13 00:55:43.152025 | orchestrator | Saturday 13 September 2025 00:52:51 +0000 (0:00:00.374) 0:03:35.313 **** 2025-09-13 00:55:43.152032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-13 00:55:43.152039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-13 00:55:43.152046 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.152053 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.152063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-13 00:55:43.152074 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.152081 | orchestrator | 2025-09-13 00:55:43.152098 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-13 00:55:43.152105 | orchestrator | Saturday 13 September 2025 00:52:52 +0000 (0:00:00.862) 0:03:36.176 **** 2025-09-13 00:55:43.152112 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.152118 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.152125 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.152131 | orchestrator | 2025-09-13 00:55:43.152138 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] 
********** 2025-09-13 00:55:43.152145 | orchestrator | Saturday 13 September 2025 00:52:53 +0000 (0:00:00.448) 0:03:36.625 **** 2025-09-13 00:55:43.152151 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.152158 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.152164 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.152171 | orchestrator | 2025-09-13 00:55:43.152178 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-13 00:55:43.152184 | orchestrator | Saturday 13 September 2025 00:52:54 +0000 (0:00:01.296) 0:03:37.921 **** 2025-09-13 00:55:43.152191 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.152197 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.152204 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.152211 | orchestrator | 2025-09-13 00:55:43.152217 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-13 00:55:43.152224 | orchestrator | Saturday 13 September 2025 00:52:54 +0000 (0:00:00.308) 0:03:38.229 **** 2025-09-13 00:55:43.152230 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:55:43.152237 | orchestrator | 2025-09-13 00:55:43.152244 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-13 00:55:43.152250 | orchestrator | Saturday 13 September 2025 00:52:56 +0000 (0:00:01.486) 0:03:39.716 **** 2025-09-13 00:55:43.152257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-13 00:55:43.152265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-13 00:55:43.152310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-13 00:55:43.152325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 
5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-13 00:55:43.152492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-13 00:55:43.152500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-13 00:55:43.152507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-13 00:55:43.152584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}}})  2025-09-13 00:55:43.152591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-13 00:55:43.152598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-13 00:55:43.152619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-13 00:55:43.152677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-13 00:55:43.152691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': 
{'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-13 00:55:43.152718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-13 00:55:43.152725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-13 00:55:43.152772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-13 00:55:43.152786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-13 00:55:43.152793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-13 00:55:43.152895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-13 00:55:43.152909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-13 00:55:43.152947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.152954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-13 00:55:43.152961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-13 00:55:43.153010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-13 00:55:43.153041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-13 00:55:43.153060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-13 00:55:43.153067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-13 00:55:43.153157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-13 00:55:43.153164 | orchestrator | 2025-09-13 00:55:43.153172 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external 
frontend] *** 2025-09-13 00:55:43.153179 | orchestrator | Saturday 13 September 2025 00:53:00 +0000 (0:00:04.361) 0:03:44.078 **** 2025-09-13 00:55:43.153186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-13 00:55:43.153198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-13 00:55:43.153288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-13 00:55:43.153300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153307 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-13 00:55:43.153387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-13 00:55:43.153406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-13 00:55:43.153413 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-13 00:55:43.153486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-13 00:55:43.153503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-13 00:55:43.153522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-13 00:55:43.153530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-13 00:55:43.153579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 
5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-13 00:55:43.153600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-13 00:55:43.153693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-13 00:55:43.153720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-13 00:55:43.153734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-13 00:55:43.153741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-13 00:55:43.153748 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.153823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-13 00:55:43.153850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-13 00:55:43.153857 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-13 00:55:43.153871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-13 00:55:43.153878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-13 00:55:43.153921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-13 00:55:43.153943 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.153950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-13 00:55:43.153957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-13 00:55:43.153964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.153997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-13 00:55:43.154043 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-13 00:55:43.154053 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.154060 | orchestrator | 2025-09-13 00:55:43.154067 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-13 00:55:43.154074 | orchestrator | Saturday 13 September 2025 00:53:02 +0000 (0:00:01.506) 0:03:45.585 **** 2025-09-13 00:55:43.154081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-13 00:55:43.154131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-13 00:55:43.154151 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.154158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-13 00:55:43.154165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-13 00:55:43.154172 | 
orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.154179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-13 00:55:43.154185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-13 00:55:43.154192 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.154198 | orchestrator | 2025-09-13 00:55:43.154205 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-13 00:55:43.154212 | orchestrator | Saturday 13 September 2025 00:53:04 +0000 (0:00:02.072) 0:03:47.658 **** 2025-09-13 00:55:43.154218 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.154225 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.154232 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.154238 | orchestrator | 2025-09-13 00:55:43.154245 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-13 00:55:43.154251 | orchestrator | Saturday 13 September 2025 00:53:05 +0000 (0:00:01.283) 0:03:48.941 **** 2025-09-13 00:55:43.154258 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.154265 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.154271 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.154278 | orchestrator | 2025-09-13 00:55:43.154284 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-13 00:55:43.154291 | orchestrator | Saturday 13 September 2025 00:53:07 +0000 (0:00:02.199) 0:03:51.141 **** 2025-09-13 00:55:43.154298 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 
00:55:43.154310 | orchestrator | 2025-09-13 00:55:43.154316 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-13 00:55:43.154323 | orchestrator | Saturday 13 September 2025 00:53:09 +0000 (0:00:01.319) 0:03:52.461 **** 2025-09-13 00:55:43.154356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 00:55:43.154370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 00:55:43.154378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 00:55:43.154385 | orchestrator | 2025-09-13 00:55:43.154392 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-13 00:55:43.154398 | orchestrator | Saturday 13 September 2025 00:53:12 +0000 (0:00:03.740) 0:03:56.201 **** 2025-09-13 00:55:43.154405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.154417 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.154442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.154450 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.154465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.154474 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.154481 | orchestrator | 2025-09-13 00:55:43.154489 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-13 00:55:43.154497 | orchestrator | Saturday 13 September 2025 00:53:13 +0000 (0:00:00.510) 0:03:56.711 **** 2025-09-13 00:55:43.154504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-13 00:55:43.154512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-13 00:55:43.154520 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.154528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-13 00:55:43.154535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-13 00:55:43.154543 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.154550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-13 00:55:43.154558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-13 00:55:43.154571 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.154579 | orchestrator | 2025-09-13 00:55:43.154586 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-13 00:55:43.154594 | orchestrator | Saturday 13 September 2025 00:53:14 +0000 (0:00:00.748) 0:03:57.460 **** 2025-09-13 00:55:43.154601 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.154608 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.154615 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.154623 | orchestrator | 2025-09-13 00:55:43.154630 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-13 00:55:43.154638 | orchestrator | Saturday 13 September 2025 00:53:16 +0000 (0:00:01.957) 0:03:59.417 **** 2025-09-13 00:55:43.154645 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.154653 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.154659 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.154667 | orchestrator | 2025-09-13 00:55:43.154673 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-13 00:55:43.154680 | orchestrator | Saturday 13 September 2025 00:53:17 +0000 (0:00:01.893) 0:04:01.310 **** 2025-09-13 00:55:43.154687 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:55:43.154694 | orchestrator | 2025-09-13 00:55:43.154702 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 
2025-09-13 00:55:43.154709 | orchestrator | Saturday 13 September 2025 00:53:19 +0000 (0:00:01.525) 0:04:02.836 **** 2025-09-13 00:55:43.154738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 00:55:43.154748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2025-09-13 00:55:43.154755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.154769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 00:55:43.154777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.154805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 00:55:43.154814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 
'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.154820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.154830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.154837 | orchestrator | 2025-09-13 00:55:43.154843 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-13 00:55:43.154850 | orchestrator | Saturday 13 September 2025 00:53:24 +0000 (0:00:04.612) 0:04:07.449 **** 2025-09-13 
00:55:43.154873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.154885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.154891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.154898 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.154904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.154915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.154922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.154928 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.154954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.154962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.154973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.154980 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.154986 | orchestrator | 2025-09-13 00:55:43.154992 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-13 00:55:43.154998 | orchestrator | 
Saturday 13 September 2025 00:53:25 +0000 (0:00:01.215) 0:04:08.664 **** 2025-09-13 00:55:43.155004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-13 00:55:43.155011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-13 00:55:43.155018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-13 00:55:43.155024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-13 00:55:43.155030 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.155036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-13 00:55:43.155043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-13 00:55:43.155049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-13 00:55:43.155055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 
'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-13 00:55:43.155078 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.155085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-13 00:55:43.155106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-13 00:55:43.155116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-13 00:55:43.155123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-13 00:55:43.155129 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.155139 | orchestrator | 2025-09-13 00:55:43.155145 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-13 00:55:43.155152 | orchestrator | Saturday 13 September 2025 00:53:26 +0000 (0:00:00.927) 0:04:09.592 **** 2025-09-13 00:55:43.155158 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.155164 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.155170 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.155176 | orchestrator | 2025-09-13 00:55:43.155182 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-13 
00:55:43.155188 | orchestrator | Saturday 13 September 2025 00:53:27 +0000 (0:00:01.305) 0:04:10.898 **** 2025-09-13 00:55:43.155195 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.155201 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.155207 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.155213 | orchestrator | 2025-09-13 00:55:43.155219 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-13 00:55:43.155225 | orchestrator | Saturday 13 September 2025 00:53:29 +0000 (0:00:02.002) 0:04:12.901 **** 2025-09-13 00:55:43.155231 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:55:43.155237 | orchestrator | 2025-09-13 00:55:43.155244 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-13 00:55:43.155250 | orchestrator | Saturday 13 September 2025 00:53:31 +0000 (0:00:01.563) 0:04:14.464 **** 2025-09-13 00:55:43.155256 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-13 00:55:43.155262 | orchestrator | 2025-09-13 00:55:43.155268 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-13 00:55:43.155275 | orchestrator | Saturday 13 September 2025 00:53:31 +0000 (0:00:00.804) 0:04:15.269 **** 2025-09-13 00:55:43.155281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': 
['timeout tunnel 1h']}}}}) 2025-09-13 00:55:43.155288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-13 00:55:43.155294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-13 00:55:43.155301 | orchestrator | 2025-09-13 00:55:43.155307 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-13 00:55:43.155314 | orchestrator | Saturday 13 September 2025 00:53:36 +0000 (0:00:04.445) 0:04:19.714 **** 2025-09-13 00:55:43.155338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-13 00:55:43.155350 | 
orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.155359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-13 00:55:43.155366 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.155372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-13 00:55:43.155379 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.155385 | orchestrator | 2025-09-13 00:55:43.155391 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-13 00:55:43.155397 | orchestrator | Saturday 13 September 2025 00:53:37 +0000 (0:00:01.072) 0:04:20.787 **** 2025-09-13 00:55:43.155403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-13 00:55:43.155410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-13 00:55:43.155416 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.155422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-13 00:55:43.155429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-13 00:55:43.155435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-13 00:55:43.155441 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.155448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-13 00:55:43.155454 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.155460 | orchestrator | 2025-09-13 00:55:43.155466 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-13 00:55:43.155472 | orchestrator | Saturday 13 September 2025 00:53:38 +0000 (0:00:01.433) 0:04:22.220 **** 2025-09-13 00:55:43.155478 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.155485 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.155495 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.155501 | 
orchestrator | 2025-09-13 00:55:43.155507 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-13 00:55:43.155513 | orchestrator | Saturday 13 September 2025 00:53:41 +0000 (0:00:02.373) 0:04:24.594 **** 2025-09-13 00:55:43.155519 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.155525 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.155531 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.155537 | orchestrator | 2025-09-13 00:55:43.155544 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-13 00:55:43.155550 | orchestrator | Saturday 13 September 2025 00:53:44 +0000 (0:00:02.916) 0:04:27.510 **** 2025-09-13 00:55:43.155573 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-13 00:55:43.155581 | orchestrator | 2025-09-13 00:55:43.155587 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-13 00:55:43.155593 | orchestrator | Saturday 13 September 2025 00:53:45 +0000 (0:00:01.381) 0:04:28.891 **** 2025-09-13 00:55:43.155605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-13 00:55:43.155611 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.155618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': 
{'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-13 00:55:43.155624 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.155630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-13 00:55:43.155637 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.155643 | orchestrator | 2025-09-13 00:55:43.155649 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-13 00:55:43.155655 | orchestrator | Saturday 13 September 2025 00:53:46 +0000 (0:00:01.227) 0:04:30.119 **** 2025-09-13 00:55:43.155662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-13 00:55:43.155668 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.155674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-13 00:55:43.155685 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.155691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-13 00:55:43.155697 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.155703 | orchestrator | 2025-09-13 00:55:43.155709 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-13 00:55:43.155716 | orchestrator | Saturday 13 September 2025 00:53:48 +0000 (0:00:01.388) 0:04:31.508 **** 2025-09-13 00:55:43.155722 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.155728 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.155734 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.155740 | orchestrator | 2025-09-13 00:55:43.155763 | orchestrator | TASK 
[proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-13 00:55:43.155771 | orchestrator | Saturday 13 September 2025 00:53:49 +0000 (0:00:01.848) 0:04:33.356 **** 2025-09-13 00:55:43.155777 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:55:43.155783 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:55:43.155789 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:55:43.155795 | orchestrator | 2025-09-13 00:55:43.155802 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-13 00:55:43.155808 | orchestrator | Saturday 13 September 2025 00:53:52 +0000 (0:00:02.360) 0:04:35.716 **** 2025-09-13 00:55:43.155814 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:55:43.155820 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:55:43.155826 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:55:43.155832 | orchestrator | 2025-09-13 00:55:43.155838 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-13 00:55:43.155847 | orchestrator | Saturday 13 September 2025 00:53:55 +0000 (0:00:02.889) 0:04:38.605 **** 2025-09-13 00:55:43.155853 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-13 00:55:43.155859 | orchestrator | 2025-09-13 00:55:43.155866 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-13 00:55:43.155872 | orchestrator | Saturday 13 September 2025 00:53:56 +0000 (0:00:00.852) 0:04:39.458 **** 2025-09-13 00:55:43.155878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-13 00:55:43.155884 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.155891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-13 00:55:43.155901 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.155907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-13 00:55:43.155913 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.155920 | orchestrator | 2025-09-13 00:55:43.155926 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-13 00:55:43.155932 | orchestrator | Saturday 13 September 2025 00:53:57 +0000 (0:00:01.304) 0:04:40.762 **** 2025-09-13 00:55:43.155938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 
'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-13 00:55:43.155944 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.155951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-13 00:55:43.155957 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.155980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-13 00:55:43.155988 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.155994 | orchestrator | 2025-09-13 00:55:43.156000 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-13 
00:55:43.156010 | orchestrator | Saturday 13 September 2025 00:53:58 +0000 (0:00:01.347) 0:04:42.110 **** 2025-09-13 00:55:43.156016 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.156022 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.156028 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.156034 | orchestrator | 2025-09-13 00:55:43.156040 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-13 00:55:43.156046 | orchestrator | Saturday 13 September 2025 00:54:00 +0000 (0:00:01.584) 0:04:43.694 **** 2025-09-13 00:55:43.156053 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:55:43.156059 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:55:43.156065 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:55:43.156071 | orchestrator | 2025-09-13 00:55:43.156077 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-13 00:55:43.156100 | orchestrator | Saturday 13 September 2025 00:54:02 +0000 (0:00:02.431) 0:04:46.126 **** 2025-09-13 00:55:43.156107 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:55:43.156113 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:55:43.156119 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:55:43.156125 | orchestrator | 2025-09-13 00:55:43.156132 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-13 00:55:43.156138 | orchestrator | Saturday 13 September 2025 00:54:06 +0000 (0:00:03.272) 0:04:49.398 **** 2025-09-13 00:55:43.156144 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:55:43.156150 | orchestrator | 2025-09-13 00:55:43.156156 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-13 00:55:43.156162 | orchestrator | Saturday 13 September 2025 00:54:07 +0000 (0:00:01.593) 0:04:50.992 **** 2025-09-13 00:55:43.156169 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-13 00:55:43.156176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-13 00:55:43.156183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-13 00:55:43.156207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-13 00:55:43.156215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.156226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-13 00:55:43.156288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-13 00:55:43.156304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-13 00:55:43.156310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-13 00:55:43.156338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.156349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-13 00:55:43.156360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-13 00:55:43.156367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-13 00:55:43.156373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-13 00:55:43.156380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.156386 | orchestrator | 2025-09-13 00:55:43.156392 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-13 00:55:43.156399 | orchestrator | Saturday 13 September 2025 00:54:11 +0000 (0:00:03.396) 0:04:54.389 **** 2025-09-13 00:55:43.156422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.156440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-13 00:55:43.156446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-13 00:55:43.156453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-13 00:55:43.156459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 
5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.156466 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.156472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.156496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-13 00:55:43.156508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-13 00:55:43.156518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-13 00:55:43.156525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.156531 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.156538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.156544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-13 00:55:43.156550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-13 00:55:43.156578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-13 00:55:43.156589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-13 00:55:43.156596 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.156602 | orchestrator | 2025-09-13 00:55:43.156608 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-13 00:55:43.156615 | orchestrator | Saturday 13 September 2025 00:54:11 +0000 (0:00:00.705) 0:04:55.095 **** 2025-09-13 00:55:43.156621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-13 00:55:43.156628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-13 00:55:43.156634 | orchestrator | 
skipping: [testbed-node-0] 2025-09-13 00:55:43.156640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-13 00:55:43.156647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-13 00:55:43.156653 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.156659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-13 00:55:43.156665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-13 00:55:43.156671 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.156678 | orchestrator | 2025-09-13 00:55:43.156684 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-13 00:55:43.156690 | orchestrator | Saturday 13 September 2025 00:54:13 +0000 (0:00:01.499) 0:04:56.594 **** 2025-09-13 00:55:43.156696 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.156702 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.156708 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.156714 | orchestrator | 2025-09-13 00:55:43.156720 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-13 00:55:43.156727 | orchestrator | Saturday 13 September 2025 00:54:14 +0000 (0:00:01.417) 0:04:58.011 **** 2025-09-13 00:55:43.156733 | 
orchestrator | changed: [testbed-node-0]
2025-09-13 00:55:43.156739 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:55:43.156749 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:55:43.156755 | orchestrator |
2025-09-13 00:55:43.156761 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-09-13 00:55:43.156767 | orchestrator | Saturday 13 September 2025 00:54:16 +0000 (0:00:01.932) 0:04:59.944 ****
2025-09-13 00:55:43.156774 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:55:43.156780 | orchestrator |
2025-09-13 00:55:43.156786 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2025-09-13 00:55:43.156792 | orchestrator | Saturday 13 September 2025 00:54:17 +0000 (0:00:01.312) 0:05:01.257 ****
2025-09-13 00:55:43.156815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-13 00:55:43.156826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-13 00:55:43.156833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-13 00:55:43.156840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-13 00:55:43.156867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-13 00:55:43.156879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-13 00:55:43.156886 | orchestrator |
2025-09-13 00:55:43.156892 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2025-09-13 00:55:43.156899 | orchestrator | Saturday 13 September 2025 00:54:22 +0000 (0:00:05.086) 0:05:06.343 ****
2025-09-13 00:55:43.156905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-13 00:55:43.156912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-13 00:55:43.156923 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:55:43.156930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-13 00:55:43.156957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-13 00:55:43.156965 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:55:43.156972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-13 00:55:43.156978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-13 00:55:43.157000 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:55:43.157006 | orchestrator |
2025-09-13 00:55:43.157012 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2025-09-13 00:55:43.157018 | orchestrator | Saturday 13 September 2025 00:54:23 +0000 (0:00:00.599) 0:05:06.942 ****
2025-09-13 00:55:43.157024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-13 00:55:43.157031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-13 00:55:43.157037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-13 00:55:43.157044 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:55:43.157050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-13 00:55:43.157072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-13 00:55:43.157079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-13 00:55:43.157086 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:55:43.157111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-13 00:55:43.157117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-13 00:55:43.157124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-13 00:55:43.157130 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:55:43.157136 | orchestrator |
2025-09-13 00:55:43.157142 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-09-13 00:55:43.157149 | orchestrator | Saturday 13 September 2025 00:54:24 +0000 (0:00:00.842) 0:05:07.785 ****
2025-09-13 00:55:43.157155 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:55:43.157161 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:55:43.157167 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:55:43.157173 | orchestrator |
2025-09-13 00:55:43.157179 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-09-13 00:55:43.157186 | orchestrator | Saturday 13 September 2025 00:54:25 +0000 (0:00:00.781) 0:05:08.566 ****
2025-09-13 00:55:43.157192 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:55:43.157203 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:55:43.157209 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:55:43.157215 | orchestrator |
2025-09-13 00:55:43.157222 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-09-13 00:55:43.157228 | orchestrator | Saturday 13 September 2025 00:54:26 +0000 (0:00:01.407) 0:05:09.973 ****
2025-09-13 00:55:43.157234 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:55:43.157240 | orchestrator |
2025-09-13 00:55:43.157246 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-09-13 00:55:43.157253 | orchestrator | Saturday 13 September 2025 00:54:28 +0000 (0:00:01.467) 0:05:11.441 ****
2025-09-13 00:55:43.157260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-13 00:55:43.157267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-13 00:55:43.157273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:55:43.157301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-13 00:55:43.157309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:55:43.157315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-13 00:55:43.157326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-13 00:55:43.157332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:55:43.157339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:55:43.157345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-13 00:55:43.157371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-13 00:55:43.157381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-13 00:55:43.157388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:55:43.157398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:55:43.157405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-13 00:55:43.157411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-13 00:55:43.157421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-13 00:55:43.157434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:55:43.157441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:55:43.157451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-13 00:55:43.157458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-13 00:55:43.157465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-13 00:55:43.157475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:55:43.157482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:55:43.157491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-13 00:55:43.157502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-13 00:55:43.157509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-13 00:55:43.157515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:55:43.157522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:55:43.157531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-13 00:55:43.157538 | orchestrator |
2025-09-13 00:55:43.157544 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-09-13 00:55:43.157551 | orchestrator | Saturday 13 September 2025 00:54:33 +0000 (0:00:04.968) 0:05:16.409 ****
2025-09-13 00:55:43.157560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-13 00:55:43.157570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-13 00:55:43.157577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:55:43.157584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-13 00:55:43.157590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-13 00:55:43.157599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-13 00:55:43.157609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-13 00:55:43.157621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-13 00:55:43.157628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 00:55:43.157634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-13 00:55:43.157641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 00:55:43.157647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 00:55:43.157656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-13 00:55:43.157667 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.157676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 00:55:43.157683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-13 00:55:43.157689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-13 00:55:43.157696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-13 00:55:43.157703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 00:55:43.157712 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 00:55:43.157725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-13 00:55:43.157731 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.157741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-13 00:55:43.157747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-13 00:55:43.157754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 00:55:43.157760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 00:55:43.157767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-13 00:55:43.157776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-13 00:55:43.157790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 
45s']}}}})  2025-09-13 00:55:43.157796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 00:55:43.157803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 00:55:43.157809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-13 00:55:43.157815 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.157822 | orchestrator | 2025-09-13 00:55:43.157828 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-09-13 00:55:43.157834 | orchestrator | Saturday 13 September 2025 00:54:34 +0000 (0:00:01.299) 0:05:17.709 **** 2025-09-13 00:55:43.157841 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-13 00:55:43.157847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-13 00:55:43.157853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-13 00:55:43.157864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-13 00:55:43.157871 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.157877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-13 00:55:43.157886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-13 00:55:43.157893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True}})  2025-09-13 00:55:43.157903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-13 00:55:43.157909 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.157915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-13 00:55:43.157922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-13 00:55:43.157928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-13 00:55:43.157935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-13 00:55:43.157941 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.157947 | orchestrator | 2025-09-13 00:55:43.157953 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-13 00:55:43.157960 | orchestrator | Saturday 13 September 2025 00:54:35 +0000 (0:00:01.067) 0:05:18.776 **** 2025-09-13 
00:55:43.157966 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.157972 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.157978 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.157985 | orchestrator | 2025-09-13 00:55:43.157991 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-09-13 00:55:43.157997 | orchestrator | Saturday 13 September 2025 00:54:35 +0000 (0:00:00.475) 0:05:19.252 **** 2025-09-13 00:55:43.158003 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.158009 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.158040 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.158048 | orchestrator | 2025-09-13 00:55:43.158054 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-13 00:55:43.158061 | orchestrator | Saturday 13 September 2025 00:54:37 +0000 (0:00:01.462) 0:05:20.714 **** 2025-09-13 00:55:43.158071 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:55:43.158077 | orchestrator | 2025-09-13 00:55:43.158083 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-09-13 00:55:43.158128 | orchestrator | Saturday 13 September 2025 00:54:39 +0000 (0:00:01.749) 0:05:22.463 **** 2025-09-13 00:55:43.158136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-13 00:55:43.158151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-13 00:55:43.158159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-13 00:55:43.158166 | orchestrator | 2025-09-13 00:55:43.158172 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-13 00:55:43.158178 | orchestrator | Saturday 13 September 2025 00:54:41 +0000 (0:00:02.483) 0:05:24.947 **** 2025-09-13 00:55:43.158184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-13 00:55:43.158195 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.158202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 
'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-13 00:55:43.158209 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.158222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-13 00:55:43.158229 | orchestrator | skipping: 
[testbed-node-2]
2025-09-13 00:55:43.158235 | orchestrator |
2025-09-13 00:55:43.158241 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-09-13 00:55:43.158247 | orchestrator | Saturday 13 September 2025 00:54:41 +0000 (0:00:00.403) 0:05:25.350 ****
2025-09-13 00:55:43.158254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-13 00:55:43.158260 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:55:43.158266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-13 00:55:43.158272 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:55:43.158279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-13 00:55:43.158285 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:55:43.158291 | orchestrator |
2025-09-13 00:55:43.158297 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-09-13 00:55:43.158303 | orchestrator | Saturday 13 September 2025 00:54:42 +0000 (0:00:01.000) 0:05:26.351 ****
2025-09-13 00:55:43.158315 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:55:43.158321 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:55:43.158327 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:55:43.158333 | orchestrator |
2025-09-13 00:55:43.158339 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-09-13 00:55:43.158346 | orchestrator | Saturday 13 September 2025 00:54:43 +0000 (0:00:00.435) 0:05:26.787 ****
2025-09-13 00:55:43.158352 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:55:43.158358 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:55:43.158364 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:55:43.158370 | orchestrator |
2025-09-13 00:55:43.158376 | orchestrator | TASK [include_role : skyline] **************************************************
2025-09-13 00:55:43.158382 | orchestrator | Saturday 13 September 2025 00:54:44 +0000 (0:00:01.313) 0:05:28.100 ****
2025-09-13 00:55:43.158389 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:55:43.158395 | orchestrator |
2025-09-13 00:55:43.158401 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-09-13 00:55:43.158407 | orchestrator | Saturday 13 September 2025 00:54:46 +0000 (0:00:01.766) 0:05:29.867 ****
2025-09-13 00:55:43.158413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-13 00:55:43.158423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image':
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-13 00:55:43.158433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-13 00:55:43.158447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-13 00:55:43.158455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-13 00:55:43.158461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-13 00:55:43.158468 | orchestrator | 2025-09-13 00:55:43.158477 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-13 00:55:43.158483 | orchestrator | Saturday 13 September 2025 00:54:52 +0000 (0:00:06.055) 0:05:35.923 **** 2025-09-13 00:55:43.158492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.158503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 
'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.158509 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.158516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.158522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.158529 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.158542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.158548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-13 00:55:43.158559 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.158565 | orchestrator | 2025-09-13 00:55:43.158572 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-13 00:55:43.158578 | orchestrator | Saturday 13 September 2025 00:54:53 +0000 (0:00:00.694) 0:05:36.617 **** 2025-09-13 00:55:43.158584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-13 00:55:43.158590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-13 00:55:43.158597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-13 00:55:43.158603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-13 00:55:43.158609 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.158616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-13 00:55:43.158622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-13 00:55:43.158628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-13 00:55:43.158634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-13 00:55:43.158641 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.158647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-13 00:55:43.158655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-13 00:55:43.158661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-13 00:55:43.158667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-13 00:55:43.158676 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.158681 | orchestrator | 2025-09-13 00:55:43.158689 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-13 00:55:43.158695 | orchestrator | Saturday 13 September 2025 00:54:54 +0000 (0:00:01.664) 0:05:38.282 **** 2025-09-13 00:55:43.158700 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.158706 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.158711 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.158716 | orchestrator | 2025-09-13 00:55:43.158722 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-09-13 00:55:43.158727 | orchestrator | Saturday 13 September 2025 00:54:56 +0000 (0:00:01.360) 0:05:39.642 **** 2025-09-13 00:55:43.158733 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.158738 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.158743 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.158749 | orchestrator | 2025-09-13 00:55:43.158754 | orchestrator | TASK [include_role : swift] **************************************************** 2025-09-13 00:55:43.158759 | orchestrator | Saturday 13 September 2025 00:54:58 +0000 (0:00:02.163) 0:05:41.806 **** 2025-09-13 00:55:43.158765 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.158770 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.158776 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.158781 | 
orchestrator | 2025-09-13 00:55:43.158786 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-09-13 00:55:43.158792 | orchestrator | Saturday 13 September 2025 00:54:58 +0000 (0:00:00.339) 0:05:42.146 **** 2025-09-13 00:55:43.158797 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.158803 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.158808 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.158813 | orchestrator | 2025-09-13 00:55:43.158819 | orchestrator | TASK [include_role : trove] **************************************************** 2025-09-13 00:55:43.158824 | orchestrator | Saturday 13 September 2025 00:54:59 +0000 (0:00:00.302) 0:05:42.448 **** 2025-09-13 00:55:43.158829 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.158835 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.158840 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.158846 | orchestrator | 2025-09-13 00:55:43.158851 | orchestrator | TASK [include_role : venus] **************************************************** 2025-09-13 00:55:43.158856 | orchestrator | Saturday 13 September 2025 00:54:59 +0000 (0:00:00.640) 0:05:43.088 **** 2025-09-13 00:55:43.158862 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.158867 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.158872 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.158878 | orchestrator | 2025-09-13 00:55:43.158883 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-09-13 00:55:43.158888 | orchestrator | Saturday 13 September 2025 00:55:00 +0000 (0:00:00.330) 0:05:43.418 **** 2025-09-13 00:55:43.158894 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.158899 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.158905 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.158910 | 
orchestrator | 2025-09-13 00:55:43.158915 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-09-13 00:55:43.158921 | orchestrator | Saturday 13 September 2025 00:55:00 +0000 (0:00:00.329) 0:05:43.748 **** 2025-09-13 00:55:43.158926 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.158931 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.158937 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.158942 | orchestrator | 2025-09-13 00:55:43.158947 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-09-13 00:55:43.158953 | orchestrator | Saturday 13 September 2025 00:55:01 +0000 (0:00:00.875) 0:05:44.624 **** 2025-09-13 00:55:43.158962 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:55:43.158967 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:55:43.158972 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:55:43.158978 | orchestrator | 2025-09-13 00:55:43.158983 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-09-13 00:55:43.158989 | orchestrator | Saturday 13 September 2025 00:55:01 +0000 (0:00:00.684) 0:05:45.309 **** 2025-09-13 00:55:43.158994 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:55:43.158999 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:55:43.159005 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:55:43.159010 | orchestrator | 2025-09-13 00:55:43.159016 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-09-13 00:55:43.159021 | orchestrator | Saturday 13 September 2025 00:55:02 +0000 (0:00:00.356) 0:05:45.665 **** 2025-09-13 00:55:43.159027 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:55:43.159032 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:55:43.159037 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:55:43.159043 | orchestrator | 2025-09-13 00:55:43.159048 | orchestrator 
| RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-09-13 00:55:43.159054 | orchestrator | Saturday 13 September 2025 00:55:03 +0000 (0:00:00.861) 0:05:46.527 **** 2025-09-13 00:55:43.159059 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:55:43.159064 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:55:43.159070 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:55:43.159075 | orchestrator | 2025-09-13 00:55:43.159080 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-09-13 00:55:43.159086 | orchestrator | Saturday 13 September 2025 00:55:04 +0000 (0:00:01.185) 0:05:47.713 **** 2025-09-13 00:55:43.159101 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:55:43.159107 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:55:43.159115 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:55:43.159120 | orchestrator | 2025-09-13 00:55:43.159126 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-09-13 00:55:43.159131 | orchestrator | Saturday 13 September 2025 00:55:05 +0000 (0:00:00.875) 0:05:48.588 **** 2025-09-13 00:55:43.159136 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.159142 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.159147 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.159153 | orchestrator | 2025-09-13 00:55:43.159158 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-09-13 00:55:43.159164 | orchestrator | Saturday 13 September 2025 00:55:13 +0000 (0:00:08.435) 0:05:57.024 **** 2025-09-13 00:55:43.159169 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:55:43.159175 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:55:43.159180 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:55:43.159185 | orchestrator | 2025-09-13 00:55:43.159191 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql 
container] *************** 2025-09-13 00:55:43.159199 | orchestrator | Saturday 13 September 2025 00:55:14 +0000 (0:00:00.642) 0:05:57.666 **** 2025-09-13 00:55:43.159204 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.159210 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.159215 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.159221 | orchestrator | 2025-09-13 00:55:43.159226 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-09-13 00:55:43.159232 | orchestrator | Saturday 13 September 2025 00:55:27 +0000 (0:00:12.867) 0:06:10.534 **** 2025-09-13 00:55:43.159237 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:55:43.159242 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:55:43.159248 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:55:43.159253 | orchestrator | 2025-09-13 00:55:43.159259 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-09-13 00:55:43.159264 | orchestrator | Saturday 13 September 2025 00:55:28 +0000 (0:00:01.145) 0:06:11.680 **** 2025-09-13 00:55:43.159270 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:55:43.159275 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:55:43.159284 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:55:43.159290 | orchestrator | 2025-09-13 00:55:43.159295 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-09-13 00:55:43.159301 | orchestrator | Saturday 13 September 2025 00:55:32 +0000 (0:00:04.134) 0:06:15.814 **** 2025-09-13 00:55:43.159306 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.159312 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.159317 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.159322 | orchestrator | 2025-09-13 00:55:43.159328 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 
2025-09-13 00:55:43.159333 | orchestrator | Saturday 13 September 2025 00:55:32 +0000 (0:00:00.322) 0:06:16.136 **** 2025-09-13 00:55:43.159339 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.159344 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.159350 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.159355 | orchestrator | 2025-09-13 00:55:43.159361 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-09-13 00:55:43.159366 | orchestrator | Saturday 13 September 2025 00:55:33 +0000 (0:00:00.340) 0:06:16.477 **** 2025-09-13 00:55:43.159371 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.159377 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.159382 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.159387 | orchestrator | 2025-09-13 00:55:43.159393 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-09-13 00:55:43.159398 | orchestrator | Saturday 13 September 2025 00:55:33 +0000 (0:00:00.669) 0:06:17.146 **** 2025-09-13 00:55:43.159404 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.159409 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.159414 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.159420 | orchestrator | 2025-09-13 00:55:43.159425 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-09-13 00:55:43.159431 | orchestrator | Saturday 13 September 2025 00:55:34 +0000 (0:00:00.348) 0:06:17.495 **** 2025-09-13 00:55:43.159436 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.159442 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.159447 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.159452 | orchestrator | 2025-09-13 00:55:43.159458 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 
2025-09-13 00:55:43.159463 | orchestrator | Saturday 13 September 2025 00:55:34 +0000 (0:00:00.332) 0:06:17.827 **** 2025-09-13 00:55:43.159469 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:55:43.159474 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:55:43.159479 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:55:43.159485 | orchestrator | 2025-09-13 00:55:43.159490 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-09-13 00:55:43.159496 | orchestrator | Saturday 13 September 2025 00:55:34 +0000 (0:00:00.348) 0:06:18.176 **** 2025-09-13 00:55:43.159501 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:55:43.159506 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:55:43.159512 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:55:43.159517 | orchestrator | 2025-09-13 00:55:43.159523 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-09-13 00:55:43.159528 | orchestrator | Saturday 13 September 2025 00:55:39 +0000 (0:00:05.181) 0:06:23.358 **** 2025-09-13 00:55:43.159533 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:55:43.159539 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:55:43.159544 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:55:43.159550 | orchestrator | 2025-09-13 00:55:43.159555 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:55:43.159561 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-13 00:55:43.159566 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-13 00:55:43.159575 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-13 00:55:43.159581 | orchestrator | 2025-09-13 00:55:43.159586 | orchestrator | 2025-09-13 00:55:43.159594 | orchestrator 
| TASKS RECAP ******************************************************************** 2025-09-13 00:55:43.159600 | orchestrator | Saturday 13 September 2025 00:55:40 +0000 (0:00:00.860) 0:06:24.218 **** 2025-09-13 00:55:43.159605 | orchestrator | =============================================================================== 2025-09-13 00:55:43.159610 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.87s 2025-09-13 00:55:43.159616 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.44s 2025-09-13 00:55:43.159621 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 7.66s 2025-09-13 00:55:43.159627 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.06s 2025-09-13 00:55:43.159632 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.18s 2025-09-13 00:55:43.159640 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.09s 2025-09-13 00:55:43.159646 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.97s 2025-09-13 00:55:43.159651 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.61s 2025-09-13 00:55:43.159656 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.61s 2025-09-13 00:55:43.159662 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.56s 2025-09-13 00:55:43.159667 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.46s 2025-09-13 00:55:43.159673 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.45s 2025-09-13 00:55:43.159678 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.43s 2025-09-13 00:55:43.159684 | orchestrator | haproxy-config : 
Copying over neutron haproxy config -------------------- 4.36s 2025-09-13 00:55:43.159689 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.24s 2025-09-13 00:55:43.159694 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.20s 2025-09-13 00:55:43.159700 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.13s 2025-09-13 00:55:43.159705 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.74s 2025-09-13 00:55:43.159711 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.72s 2025-09-13 00:55:43.159716 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.51s 2025-09-13 00:55:43.159722 | orchestrator | 2025-09-13 00:55:43 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:55:46.182924 | orchestrator | 2025-09-13 00:55:46 | INFO  | Task e5b65605-2759-4d22-b24c-1a6e872a3456 is in state STARTED 2025-09-13 00:55:46.184726 | orchestrator | 2025-09-13 00:55:46 | INFO  | Task 8acf60d3-1bd8-4dbb-a4b9-f2d35a745e30 is in state STARTED 2025-09-13 00:55:46.185407 | orchestrator | 2025-09-13 00:55:46 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state STARTED 2025-09-13 00:55:46.185433 | orchestrator | 2025-09-13 00:55:46 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:57:45.135900 | orchestrator | 2025-09-13 00:57:45 | INFO  | Task e5b65605-2759-4d22-b24c-1a6e872a3456 is in state STARTED 2025-09-13 00:57:45.136951 | orchestrator | 2025-09-13 00:57:45 | INFO  | Task 8acf60d3-1bd8-4dbb-a4b9-f2d35a745e30 is in state STARTED 2025-09-13 00:57:45.138482 | orchestrator | 2025-09-13 00:57:45 | INFO  | Task 7d6ab84f-924f-4881-acb1-b90789aa2e9e is in state STARTED 2025-09-13 00:57:45.144994 | orchestrator | 2025-09-13 00:57:45 | INFO  | Task 62f18e82-079c-4104-b0ed-61f83504a4d6 is in state SUCCESS 2025-09-13 00:57:45.150202 | orchestrator | 2025-09-13 00:57:45.150283 | orchestrator | 2025-09-13 00:57:45.150298 | orchestrator | PLAY [Prepare
deployment of Ceph services] ************************************* 2025-09-13 00:57:45.150312 | orchestrator | 2025-09-13 00:57:45.150323 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-13 00:57:45.150335 | orchestrator | Saturday 13 September 2025 00:46:29 +0000 (0:00:00.864) 0:00:00.864 **** 2025-09-13 00:57:45.150347 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:57:45.150360 | orchestrator | 2025-09-13 00:57:45.150370 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-13 00:57:45.150381 | orchestrator | Saturday 13 September 2025 00:46:30 +0000 (0:00:01.065) 0:00:01.930 **** 2025-09-13 00:57:45.150392 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.150404 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.150415 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.150425 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.150436 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.150447 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.150457 | orchestrator | 2025-09-13 00:57:45.150469 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-13 00:57:45.150480 | orchestrator | Saturday 13 September 2025 00:46:32 +0000 (0:00:01.741) 0:00:03.672 **** 2025-09-13 00:57:45.150491 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.150502 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.150512 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.150523 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.150534 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.150544 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.150555 | orchestrator | 2025-09-13 00:57:45.150565 | orchestrator | TASK 
[ceph-facts : Check if podman binary is present] ************************** 2025-09-13 00:57:45.150576 | orchestrator | Saturday 13 September 2025 00:46:33 +0000 (0:00:00.901) 0:00:04.573 **** 2025-09-13 00:57:45.150587 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.150598 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.150608 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.150619 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.150629 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.150640 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.150651 | orchestrator | 2025-09-13 00:57:45.150662 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-13 00:57:45.150673 | orchestrator | Saturday 13 September 2025 00:46:34 +0000 (0:00:01.308) 0:00:05.882 **** 2025-09-13 00:57:45.150683 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.150694 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.150736 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.150747 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.150800 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.150813 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.150823 | orchestrator | 2025-09-13 00:57:45.150834 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-13 00:57:45.150845 | orchestrator | Saturday 13 September 2025 00:46:35 +0000 (0:00:00.793) 0:00:06.676 **** 2025-09-13 00:57:45.150880 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.150892 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.150903 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.150914 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.150924 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.150935 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.150945 | orchestrator | 2025-09-13 
00:57:45.150957 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-13 00:57:45.150967 | orchestrator | Saturday 13 September 2025 00:46:36 +0000 (0:00:00.924) 0:00:07.601 **** 2025-09-13 00:57:45.150978 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.150989 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.150999 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.151010 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.151020 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.151031 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.151041 | orchestrator | 2025-09-13 00:57:45.151052 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-13 00:57:45.151063 | orchestrator | Saturday 13 September 2025 00:46:37 +0000 (0:00:00.866) 0:00:08.467 **** 2025-09-13 00:57:45.151074 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.151086 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.151097 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.151107 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.151118 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.151129 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.151139 | orchestrator | 2025-09-13 00:57:45.151150 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-13 00:57:45.151161 | orchestrator | Saturday 13 September 2025 00:46:38 +0000 (0:00:00.986) 0:00:09.453 **** 2025-09-13 00:57:45.151172 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.151182 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.151193 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.151204 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.151214 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.151225 | orchestrator | ok: 
[testbed-node-2] 2025-09-13 00:57:45.151235 | orchestrator | 2025-09-13 00:57:45.151246 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-13 00:57:45.151257 | orchestrator | Saturday 13 September 2025 00:46:40 +0000 (0:00:01.756) 0:00:11.210 **** 2025-09-13 00:57:45.151268 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-13 00:57:45.151279 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-13 00:57:45.151289 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-13 00:57:45.151300 | orchestrator | 2025-09-13 00:57:45.151311 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-13 00:57:45.151322 | orchestrator | Saturday 13 September 2025 00:46:41 +0000 (0:00:01.272) 0:00:12.483 **** 2025-09-13 00:57:45.151332 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.151343 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.151354 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.151364 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.151375 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.151385 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.151396 | orchestrator | 2025-09-13 00:57:45.151436 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-13 00:57:45.151458 | orchestrator | Saturday 13 September 2025 00:46:43 +0000 (0:00:02.510) 0:00:14.994 **** 2025-09-13 00:57:45.151469 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-13 00:57:45.151480 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-13 00:57:45.151491 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2025-09-13 00:57:45.151502 | orchestrator | 2025-09-13 00:57:45.151513 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-13 00:57:45.151524 | orchestrator | Saturday 13 September 2025 00:46:47 +0000 (0:00:03.986) 0:00:18.980 **** 2025-09-13 00:57:45.151535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-13 00:57:45.151546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-13 00:57:45.151557 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-13 00:57:45.151567 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.151578 | orchestrator | 2025-09-13 00:57:45.151589 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-13 00:57:45.151605 | orchestrator | Saturday 13 September 2025 00:46:49 +0000 (0:00:01.392) 0:00:20.372 **** 2025-09-13 00:57:45.151618 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.151632 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.151643 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.151654 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.151665 | orchestrator | 2025-09-13 00:57:45.151676 | orchestrator | TASK [ceph-facts : Set_fact running_mon - 
non_container] ***********************
2025-09-13 00:57:45.151687 | orchestrator | Saturday 13 September 2025 00:46:50 +0000 (0:00:01.422) 0:00:21.795 ****
2025-09-13 00:57:45.151700 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-13 00:57:45.151713 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-13 00:57:45.151724 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-13 00:57:45.151735 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.151746 | orchestrator |
2025-09-13 00:57:45.151757 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-09-13 00:57:45.151768 | orchestrator | Saturday 13 September 2025 00:46:51 +0000 (0:00:00.272) 0:00:22.068 ****
2025-09-13 00:57:45.151799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-13 00:46:44.886751', 'end': '2025-09-13 00:46:45.153943', 'delta': '0:00:00.267192', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-13 00:57:45.151814 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-13 00:46:45.929882', 'end': '2025-09-13 00:46:46.229469', 'delta': '0:00:00.299587', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-13 00:57:45.151831 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-13 00:46:47.028805', 'end': '2025-09-13 00:46:47.308443', 'delta': '0:00:00.279638', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-13 00:57:45.151842 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.151873 | orchestrator |
2025-09-13 00:57:45.151885 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-13 00:57:45.151896 | orchestrator | Saturday 13 September 2025 00:46:52 +0000 (0:00:01.139) 0:00:23.207 ****
2025-09-13 00:57:45.151907 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.151918 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.151929 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.151940 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.151951 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.151961 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.151972 | orchestrator |
2025-09-13 00:57:45.151983 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-13 00:57:45.151994 | orchestrator | Saturday 13 September 2025 00:46:55 +0000 (0:00:03.112) 0:00:26.319 ****
2025-09-13 00:57:45.152005 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-13 00:57:45.152016 | orchestrator |
2025-09-13 00:57:45.152027 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-13 00:57:45.152038 | orchestrator | Saturday 13 September 2025 00:46:56 +0000 (0:00:01.203) 0:00:27.523 ****
2025-09-13 00:57:45.152049 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.152060 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.152070 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.152081 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.152092 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.152103 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.152114 | orchestrator |
2025-09-13 00:57:45.152125 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-13 00:57:45.152143 | orchestrator | Saturday 13 September 2025 00:46:58 +0000 (0:00:01.920) 0:00:29.443 ****
2025-09-13 00:57:45.152154 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.152165 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.152176 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.152187 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.152198 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.152209 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.152219 | orchestrator |
2025-09-13 00:57:45.152230 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-13 00:57:45.152242 | orchestrator | Saturday 13 September 2025 00:46:59 +0000 (0:00:01.494) 0:00:30.938 ****
2025-09-13 00:57:45.152252 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.152263 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.152274 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.152285 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.152295 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.152306 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.152317 | orchestrator |
2025-09-13 00:57:45.152328 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-13 00:57:45.152339 | orchestrator | Saturday 13 September 2025 00:47:00 +0000 (0:00:00.961) 0:00:31.899 ****
2025-09-13 00:57:45.152350 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.152361 | orchestrator |
2025-09-13 00:57:45.152372 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-13 00:57:45.152382 | orchestrator | Saturday 13 September 2025 00:47:00 +0000 (0:00:00.080) 0:00:31.980 ****
2025-09-13 00:57:45.152393 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.152404 | orchestrator |
2025-09-13 00:57:45.152415 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-13 00:57:45.152426 | orchestrator | Saturday 13 September 2025 00:47:01 +0000 (0:00:00.193) 0:00:32.173 ****
2025-09-13 00:57:45.152437 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.152448 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.152459 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.152470 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.152481 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.152491 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.152502 | orchestrator |
2025-09-13 00:57:45.152519 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-13 00:57:45.152531 | orchestrator | Saturday 13 September 2025 00:47:01 +0000 (0:00:00.668) 0:00:32.841 ****
2025-09-13 00:57:45.152541 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.152552 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.152563 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.152574 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.152584 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.152595 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.152606 | orchestrator |
2025-09-13 00:57:45.152616 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-13 00:57:45.152627 | orchestrator | Saturday 13 September 2025 00:47:02 +0000 (0:00:01.118) 0:00:33.960 ****
2025-09-13 00:57:45.152638 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.152649 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.152659 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.152670 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.152680 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.152691 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.152702 | orchestrator |
2025-09-13 00:57:45.152712 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-13 00:57:45.152723 | orchestrator | Saturday 13 September 2025 00:47:03 +0000 (0:00:00.754) 0:00:34.739 ****
2025-09-13 00:57:45.152739 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.152756 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.152767 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.152778 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.152788 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.152799 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.152810 | orchestrator |
2025-09-13 00:57:45.152820 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-13 00:57:45.152831 | orchestrator | Saturday 13 September 2025 00:47:04 +0000 (0:00:00.754) 0:00:35.494 ****
2025-09-13 00:57:45.152842 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.152866 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.152878 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.152888 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.152899 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.152910 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.152921 | orchestrator |
2025-09-13 00:57:45.152931 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-13 00:57:45.152942 | orchestrator | Saturday 13 September 2025 00:47:05 +0000 (0:00:00.639) 0:00:36.134 ****
2025-09-13 00:57:45.152953 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.152963 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.152974 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.152985 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.152995 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.153006 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.153017 | orchestrator |
2025-09-13 00:57:45.153028 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-13 00:57:45.153039 | orchestrator | Saturday 13 September 2025 00:47:05 +0000 (0:00:00.837) 0:00:36.971 ****
2025-09-13 00:57:45.153049 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.153060 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.153071 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.153081 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.153092 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.153103 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.153113 | orchestrator |
2025-09-13 00:57:45.153124 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-09-13 00:57:45.153135 | orchestrator | Saturday 13 September 2025 00:47:06 +0000 (0:00:00.945) 0:00:37.917 ****
2025-09-13 00:57:45.153147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--741132e6--4e77--5ad5--aab1--a12c98657a1e-osd--block--741132e6--4e77--5ad5--aab1--a12c98657a1e', 'dm-uuid-LVM-jJI5DDIpu0EItbMCyD70C1YVS3RuFgkIDzpp3s6Tq8hGjqWaSBzuz7Maducd3XlY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c9c3f5f4--a401--5886--82fa--33c7ca08590f-osd--block--c9c3f5f4--a401--5886--82fa--33c7ca08590f', 'dm-uuid-LVM-VOGzGOt7N2MJGxjnyWXZl4x2rYodV1SMq74bPzX15UowmcKMO670XD4LKiQ0PgHi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d', 'scsi-SQEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-13 00:57:45.153321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--741132e6--4e77--5ad5--aab1--a12c98657a1e-osd--block--741132e6--4e77--5ad5--aab1--a12c98657a1e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9R2x89-CdLU-slGE-dXAv-GI8t-5WOV-d3W3gk', 'scsi-0QEMU_QEMU_HARDDISK_6e724704-b413-40a8-af93-f723a1c0b62f', 'scsi-SQEMU_QEMU_HARDDISK_6e724704-b413-40a8-af93-f723a1c0b62f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-13 00:57:45.153335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c9c3f5f4--a401--5886--82fa--33c7ca08590f-osd--block--c9c3f5f4--a401--5886--82fa--33c7ca08590f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5RJTwx-hdJX-JGy3-lgIp-xI98-Pe6c-cJpC7g', 'scsi-0QEMU_QEMU_HARDDISK_e25c372e-2cb9-47f6-a0c5-1defd25ac71c', 'scsi-SQEMU_QEMU_HARDDISK_e25c372e-2cb9-47f6-a0c5-1defd25ac71c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-13 00:57:45.153347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c46d17e-adbc-49dd-8bd7-8befc745e964', 'scsi-SQEMU_QEMU_HARDDISK_0c46d17e-adbc-49dd-8bd7-8befc745e964'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-13 00:57:45.153359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-13-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-13 00:57:45.153383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b9d4bd55--4398--5073--b181--64dcd216e500-osd--block--b9d4bd55--4398--5073--b181--64dcd216e500', 'dm-uuid-LVM-1qtX0Jo6rJTSVRgewMZPqsBZ847hNk4286JrwLgbQ49IsWeBUP1OvwJlrIeXA7Ip'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b087737a--96b5--5170--ab1c--c312068a0bca-osd--block--b087737a--96b5--5170--ab1c--c312068a0bca', 'dm-uuid-LVM-4ziM4yN1AYFDpeajDs2cz5TdsO0MUbtaO80VVHJZIgmxFhRQYU2eEdcQX9mZa8AX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4283f495--c022--53d0--a3fe--4c36d70cad8f-osd--block--4283f495--c022--53d0--a3fe--4c36d70cad8f', 'dm-uuid-LVM-kp4Dl7y4fqNhI87RlNBmLiywBxnqlzkWBTHJoQKzLiT8HpT3RobzW8ESzX9IOpdT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7339ba9f--b6a9--52d7--bde1--e21ae438ff7a-osd--block--7339ba9f--b6a9--52d7--bde1--e21ae438ff7a', 'dm-uuid-LVM-kO2GP93cMpGUHtYUCcev57k9Af2LcWSdZzKCYwYU57czijTAO3e8T0YNHLLhZcxe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153481 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.153493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153510 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153521 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96', 'scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part1', 'scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part14', 'scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part15', 'scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part16', 'scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-13 00:57:45.153651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4283f495--c022--53d0--a3fe--4c36d70cad8f-osd--block--4283f495--c022--53d0--a3fe--4c36d70cad8f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BTPet6-ys3j-9eIw-fFqX-JJfw-Xj04-Wo18Tl', 'scsi-0QEMU_QEMU_HARDDISK_1763dbba-d504-4b6d-865a-93cad2d65fc8', 'scsi-SQEMU_QEMU_HARDDISK_1763dbba-d504-4b6d-865a-93cad2d65fc8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-13 00:57:45.153675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7339ba9f--b6a9--52d7--bde1--e21ae438ff7a-osd--block--7339ba9f--b6a9--52d7--bde1--e21ae438ff7a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Vbpmmt-FNR2-sy7Q-zbki-Miq7-xgRc-ePYo6O', 'scsi-0QEMU_QEMU_HARDDISK_c5da3e8c-99b7-4761-a17c-7637f0eb6556', 'scsi-SQEMU_QEMU_HARDDISK_c5da3e8c-99b7-4761-a17c-7637f0eb6556'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-13 00:57:45.153693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9346358d-8291-41dd-be96-0d8c84c54113', 'scsi-SQEMU_QEMU_HARDDISK_9346358d-8291-41dd-be96-0d8c84c54113'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-13 00:57:45.153750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-13-00-01-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-13 00:57:45.153791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-13 00:57:45.153814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters':
[], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.153831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.153842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.153874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20', 'scsi-SQEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20-part1', 'scsi-SQEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20-part14', 'scsi-SQEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20-part15', 'scsi-SQEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20-part16', 'scsi-SQEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:57:45.153894 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-13-00-02-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:57:45.153920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8', 'scsi-SQEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 
'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:57:45.153933 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b9d4bd55--4398--5073--b181--64dcd216e500-osd--block--b9d4bd55--4398--5073--b181--64dcd216e500'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wgTo8x-jTlS-kgz9-0kDJ-DbDW-rvkH-juQByI', 'scsi-0QEMU_QEMU_HARDDISK_e924364d-2e91-46ce-bd4b-cca5d229d1e6', 'scsi-SQEMU_QEMU_HARDDISK_e924364d-2e91-46ce-bd4b-cca5d229d1e6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:57:45.153945 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b087737a--96b5--5170--ab1c--c312068a0bca-osd--block--b087737a--96b5--5170--ab1c--c312068a0bca'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qZWD3J-GcDo-qoZV-e3Jd-h0uy-CDo5-lQ9w0F', 'scsi-0QEMU_QEMU_HARDDISK_f868cbab-65ba-4325-b003-03d97073cddb', 'scsi-SQEMU_QEMU_HARDDISK_f868cbab-65ba-4325-b003-03d97073cddb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:57:45.153962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a3f219a-02e3-456c-9d7f-0c5a8049cd2b', 'scsi-SQEMU_QEMU_HARDDISK_5a3f219a-02e3-456c-9d7f-0c5a8049cd2b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:57:45.153979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-13-00-02-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:57:45.153991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.154007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.154063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.154078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.154090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.154108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.154119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.154130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.154156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63', 'scsi-SQEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63-part1', 'scsi-SQEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63-part14', 'scsi-SQEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63-part15', 'scsi-SQEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63-part16', 'scsi-SQEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:57:45.154169 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-13-00-02-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:57:45.154190 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.154202 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.154213 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.154223 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.154234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.154246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.154257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.154268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.154293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.154305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.154322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.154333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:57:45.154345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1', 'scsi-SQEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1-part1', 'scsi-SQEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1-part14', 'scsi-SQEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1-part15', 'scsi-SQEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 
'5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1-part16', 'scsi-SQEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:57:45.154371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-13-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:57:45.154383 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.154394 | orchestrator | 2025-09-13 00:57:45.154405 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-13 00:57:45.154416 | orchestrator | Saturday 13 September 2025 00:47:09 +0000 (0:00:02.106) 0:00:40.024 **** 2025-09-13 00:57:45.154433 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--741132e6--4e77--5ad5--aab1--a12c98657a1e-osd--block--741132e6--4e77--5ad5--aab1--a12c98657a1e', 'dm-uuid-LVM-jJI5DDIpu0EItbMCyD70C1YVS3RuFgkIDzpp3s6Tq8hGjqWaSBzuz7Maducd3XlY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.154446 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c9c3f5f4--a401--5886--82fa--33c7ca08590f-osd--block--c9c3f5f4--a401--5886--82fa--33c7ca08590f', 'dm-uuid-LVM-VOGzGOt7N2MJGxjnyWXZl4x2rYodV1SMq74bPzX15UowmcKMO670XD4LKiQ0PgHi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.154464 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2025-09-13 00:57:45.154476 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.154487 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.154504 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.154516 | orchestrator | skipping: [testbed-node-3] => 
[repeated per-device 'skipping' loop results condensed; each item carried an identical dict shape (holders, links, partitions, sector counts) differing only in the device key]
2025-09-13 00:57:45.154661 | orchestrator | skipping: [testbed-node-3] (items loop4, loop5, loop6, loop7, sda, sdb, sdc, sdd, sr0; condition 'osd_auto_discovery | default(False) | bool' evaluated False)
2025-09-13 00:57:45.155155 | orchestrator | skipping: [testbed-node-4] (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0; condition 'osd_auto_discovery | default(False) | bool' evaluated False)
2025-09-13 00:57:45.155143 | orchestrator | skipping: [testbed-node-5] (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0; condition 'osd_auto_discovery | default(False) | bool' evaluated False)
2025-09-13 00:57:45.155288 | orchestrator | skipping: [testbed-node-0] (items loop0-loop7, sda; condition 'inventory_hostname in groups.get(osd_group_name, [])' evaluated False)
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20', 'scsi-SQEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20-part1', 'scsi-SQEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20-part14', 'scsi-SQEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20-part15', 'scsi-SQEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20-part16', 'scsi-SQEMU_QEMU_HARDDISK_868ff441-ad0d-4310-969c-d766af5d9c20-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-13 00:57:45.155306 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-13-00-02-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155324 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.155336 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155352 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155364 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155375 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155386 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155398 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155421 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155437 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155449 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63', 'scsi-SQEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63-part1', 'scsi-SQEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63-part14', 'scsi-SQEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63-part15', 'scsi-SQEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63-part16', 'scsi-SQEMU_QEMU_HARDDISK_ae0b0158-ec1f-45de-80c9-2bee6f7c9d63-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155462 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-13-00-02-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155479 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.155490 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.155507 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155527 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155539 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155550 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 
00:57:45.155561 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155573 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155596 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 2025-09-13 00:57:45 | INFO  | Wait 1 second(s) until the next check 2025-09-13 00:57:45.155838 | orchestrator | 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155894 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155916 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1', 'scsi-SQEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1-part1', 'scsi-SQEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1-part14', 'scsi-SQEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1-part15', 'scsi-SQEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1-part16', 'scsi-SQEMU_QEMU_HARDDISK_e8be299e-6f26-4fcd-9ad7-d2c8303193a1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-13 00:57:45.155929 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-13-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:57:45.155951 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.155962 | orchestrator | 2025-09-13 00:57:45.155973 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-13 00:57:45.155984 | orchestrator | Saturday 13 September 2025 00:47:10 +0000 (0:00:01.635) 0:00:41.660 **** 2025-09-13 00:57:45.156002 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.156014 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.156024 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.156035 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.156046 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.156057 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.156068 | orchestrator | 2025-09-13 00:57:45.156079 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-13 00:57:45.156090 | orchestrator | Saturday 13 September 2025 00:47:12 +0000 (0:00:01.404) 0:00:43.064 **** 2025-09-13 00:57:45.156101 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.156111 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.156122 | 
orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.156133 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.156143 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.156154 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.156165 | orchestrator | 2025-09-13 00:57:45.156175 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-13 00:57:45.156186 | orchestrator | Saturday 13 September 2025 00:47:12 +0000 (0:00:00.661) 0:00:43.726 **** 2025-09-13 00:57:45.156197 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.156208 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.156219 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.156234 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.156245 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.156256 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.156267 | orchestrator | 2025-09-13 00:57:45.156278 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-13 00:57:45.156289 | orchestrator | Saturday 13 September 2025 00:47:13 +0000 (0:00:01.146) 0:00:44.872 **** 2025-09-13 00:57:45.156300 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.156311 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.156321 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.156332 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.156343 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.156353 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.156364 | orchestrator | 2025-09-13 00:57:45.156375 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-13 00:57:45.156388 | orchestrator | Saturday 13 September 2025 00:47:15 +0000 (0:00:01.328) 0:00:46.200 **** 2025-09-13 00:57:45.156400 | orchestrator | skipping: 
[testbed-node-3] 2025-09-13 00:57:45.156413 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.156425 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.156437 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.156449 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.156461 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.156474 | orchestrator | 2025-09-13 00:57:45.156486 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-13 00:57:45.156498 | orchestrator | Saturday 13 September 2025 00:47:16 +0000 (0:00:01.652) 0:00:47.852 **** 2025-09-13 00:57:45.156515 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.156527 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.156537 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.156548 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.156559 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.156569 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.156580 | orchestrator | 2025-09-13 00:57:45.156591 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-13 00:57:45.156602 | orchestrator | Saturday 13 September 2025 00:47:17 +0000 (0:00:00.904) 0:00:48.757 **** 2025-09-13 00:57:45.156612 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-13 00:57:45.156624 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-13 00:57:45.156634 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-13 00:57:45.156645 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-13 00:57:45.156656 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-13 00:57:45.156667 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-13 00:57:45.156677 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 
2025-09-13 00:57:45.156688 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-13 00:57:45.156699 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-09-13 00:57:45.156710 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-13 00:57:45.156720 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-09-13 00:57:45.156731 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-13 00:57:45.156741 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-09-13 00:57:45.156752 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-13 00:57:45.156763 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-13 00:57:45.156773 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-09-13 00:57:45.156784 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-09-13 00:57:45.156795 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-09-13 00:57:45.156806 | orchestrator | 2025-09-13 00:57:45.156816 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-13 00:57:45.156827 | orchestrator | Saturday 13 September 2025 00:47:23 +0000 (0:00:05.613) 0:00:54.372 **** 2025-09-13 00:57:45.156838 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-13 00:57:45.156849 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-13 00:57:45.156912 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-13 00:57:45.156923 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.156934 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-13 00:57:45.156945 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-13 00:57:45.156955 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-13 00:57:45.156966 | orchestrator | skipping: [testbed-node-4] 
2025-09-13 00:57:45.156976 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-13 00:57:45.156987 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-13 00:57:45.157004 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-13 00:57:45.157015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-13 00:57:45.157026 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-13 00:57:45.157036 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-13 00:57:45.157047 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.157058 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-13 00:57:45.157068 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-13 00:57:45.157079 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-13 00:57:45.157097 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.157108 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.157118 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-13 00:57:45.157129 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-13 00:57:45.157140 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-13 00:57:45.157150 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.157161 | orchestrator | 2025-09-13 00:57:45.157171 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-13 00:57:45.157187 | orchestrator | Saturday 13 September 2025 00:47:24 +0000 (0:00:01.247) 0:00:55.619 **** 2025-09-13 00:57:45.157198 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.157209 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.157220 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.157231 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-13 00:57:45.157242 | orchestrator | 2025-09-13 00:57:45.157253 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-13 00:57:45.157265 | orchestrator | Saturday 13 September 2025 00:47:25 +0000 (0:00:01.331) 0:00:56.951 **** 2025-09-13 00:57:45.157276 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.157286 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.157297 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.157308 | orchestrator | 2025-09-13 00:57:45.157319 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-13 00:57:45.157329 | orchestrator | Saturday 13 September 2025 00:47:26 +0000 (0:00:00.383) 0:00:57.335 **** 2025-09-13 00:57:45.157339 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.157349 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.157358 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.157367 | orchestrator | 2025-09-13 00:57:45.157377 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-13 00:57:45.157386 | orchestrator | Saturday 13 September 2025 00:47:26 +0000 (0:00:00.671) 0:00:58.006 **** 2025-09-13 00:57:45.157396 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.157405 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.157415 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.157424 | orchestrator | 2025-09-13 00:57:45.157434 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-13 00:57:45.157443 | orchestrator | Saturday 13 September 2025 00:47:27 +0000 (0:00:00.475) 0:00:58.481 **** 2025-09-13 00:57:45.157453 | orchestrator | 
ok: [testbed-node-3] 2025-09-13 00:57:45.157462 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.157472 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.157481 | orchestrator | 2025-09-13 00:57:45.157491 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-13 00:57:45.157500 | orchestrator | Saturday 13 September 2025 00:47:28 +0000 (0:00:01.228) 0:00:59.709 **** 2025-09-13 00:57:45.157510 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-13 00:57:45.157519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-13 00:57:45.157529 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-13 00:57:45.157538 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.157548 | orchestrator | 2025-09-13 00:57:45.157557 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-13 00:57:45.157567 | orchestrator | Saturday 13 September 2025 00:47:29 +0000 (0:00:00.744) 0:01:00.454 **** 2025-09-13 00:57:45.157577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-13 00:57:45.157586 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-13 00:57:45.157596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-13 00:57:45.157611 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.157620 | orchestrator | 2025-09-13 00:57:45.157630 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-13 00:57:45.157639 | orchestrator | Saturday 13 September 2025 00:47:30 +0000 (0:00:00.591) 0:01:01.046 **** 2025-09-13 00:57:45.157649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-13 00:57:45.157658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-13 00:57:45.157668 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2025-09-13 00:57:45.157677 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.157686 | orchestrator | 2025-09-13 00:57:45.157696 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-13 00:57:45.157705 | orchestrator | Saturday 13 September 2025 00:47:30 +0000 (0:00:00.458) 0:01:01.505 **** 2025-09-13 00:57:45.157720 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.157736 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.157753 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.157768 | orchestrator | 2025-09-13 00:57:45.157782 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-13 00:57:45.157798 | orchestrator | Saturday 13 September 2025 00:47:30 +0000 (0:00:00.355) 0:01:01.860 **** 2025-09-13 00:57:45.157813 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-13 00:57:45.157828 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-13 00:57:45.157843 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-13 00:57:45.157878 | orchestrator | 2025-09-13 00:57:45.157900 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-13 00:57:45.157914 | orchestrator | Saturday 13 September 2025 00:47:31 +0000 (0:00:00.973) 0:01:02.834 **** 2025-09-13 00:57:45.157928 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-13 00:57:45.157941 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-13 00:57:45.157955 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-13 00:57:45.157969 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-13 00:57:45.157983 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-13 00:57:45.157998 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-13 00:57:45.158012 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-13 00:57:45.158082 | orchestrator | 2025-09-13 00:57:45.158098 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-13 00:57:45.158124 | orchestrator | Saturday 13 September 2025 00:47:33 +0000 (0:00:01.309) 0:01:04.144 **** 2025-09-13 00:57:45.158139 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-13 00:57:45.158148 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-13 00:57:45.158158 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-13 00:57:45.158167 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-13 00:57:45.158177 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-13 00:57:45.158186 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-13 00:57:45.158196 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-13 00:57:45.158205 | orchestrator | 2025-09-13 00:57:45.158215 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-13 00:57:45.158224 | orchestrator | Saturday 13 September 2025 00:47:34 +0000 (0:00:01.841) 0:01:05.986 **** 2025-09-13 00:57:45.158235 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:57:45.158255 | orchestrator | 2025-09-13 00:57:45.158265 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2025-09-13 00:57:45.158275 | orchestrator | Saturday 13 September 2025 00:47:36 +0000 (0:00:01.109) 0:01:07.095 **** 2025-09-13 00:57:45.158284 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:57:45.158294 | orchestrator | 2025-09-13 00:57:45.158303 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-13 00:57:45.158313 | orchestrator | Saturday 13 September 2025 00:47:37 +0000 (0:00:01.229) 0:01:08.324 **** 2025-09-13 00:57:45.158322 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.158332 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.158342 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.158351 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.158361 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.158370 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.158380 | orchestrator | 2025-09-13 00:57:45.158389 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-13 00:57:45.158399 | orchestrator | Saturday 13 September 2025 00:47:38 +0000 (0:00:01.434) 0:01:09.759 **** 2025-09-13 00:57:45.158409 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.158418 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.158428 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.158437 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.158447 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.158456 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.158466 | orchestrator | 2025-09-13 00:57:45.158476 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-13 00:57:45.158485 | orchestrator | Saturday 13 September 2025 00:47:39 +0000 
(0:00:01.243) 0:01:11.002 **** 2025-09-13 00:57:45.158495 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.158504 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.158514 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.158523 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.158533 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.158543 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.158552 | orchestrator | 2025-09-13 00:57:45.158562 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-13 00:57:45.158571 | orchestrator | Saturday 13 September 2025 00:47:41 +0000 (0:00:01.752) 0:01:12.755 **** 2025-09-13 00:57:45.158581 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.158590 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.158600 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.158609 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.158619 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.158628 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.158638 | orchestrator | 2025-09-13 00:57:45.158647 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-13 00:57:45.158657 | orchestrator | Saturday 13 September 2025 00:47:42 +0000 (0:00:01.110) 0:01:13.865 **** 2025-09-13 00:57:45.158666 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.158676 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.158685 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.158695 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.158705 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.158714 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.158723 | orchestrator | 2025-09-13 00:57:45.158733 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2025-09-13 00:57:45.158758 | orchestrator | Saturday 13 September 2025 00:47:44 +0000 (0:00:01.524) 0:01:15.389 **** 2025-09-13 00:57:45.158769 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.158778 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.158788 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.158804 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.158813 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.158823 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.158832 | orchestrator | 2025-09-13 00:57:45.158842 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-13 00:57:45.158851 | orchestrator | Saturday 13 September 2025 00:47:45 +0000 (0:00:00.932) 0:01:16.322 **** 2025-09-13 00:57:45.158878 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.158888 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.158897 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.158907 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.158916 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.158925 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.158935 | orchestrator | 2025-09-13 00:57:45.158944 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-13 00:57:45.158959 | orchestrator | Saturday 13 September 2025 00:47:45 +0000 (0:00:00.671) 0:01:16.994 **** 2025-09-13 00:57:45.158969 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.158978 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.158988 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.158997 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.159007 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.159016 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.159026 | orchestrator | 2025-09-13 
00:57:45.159035 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-13 00:57:45.159045 | orchestrator | Saturday 13 September 2025 00:47:47 +0000 (0:00:01.303) 0:01:18.298 **** 2025-09-13 00:57:45.159054 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.159064 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.159073 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.159083 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.159092 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.159102 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.159111 | orchestrator | 2025-09-13 00:57:45.159121 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-13 00:57:45.159130 | orchestrator | Saturday 13 September 2025 00:47:48 +0000 (0:00:01.035) 0:01:19.333 **** 2025-09-13 00:57:45.159140 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.159150 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.159159 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.159169 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.159178 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.159188 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.159198 | orchestrator | 2025-09-13 00:57:45.159207 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-13 00:57:45.159217 | orchestrator | Saturday 13 September 2025 00:47:49 +0000 (0:00:00.821) 0:01:20.155 **** 2025-09-13 00:57:45.159226 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.159236 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.159245 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.159255 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.159264 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.159274 | 
orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.159283 | orchestrator | 2025-09-13 00:57:45.159293 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-13 00:57:45.159303 | orchestrator | Saturday 13 September 2025 00:47:49 +0000 (0:00:00.653) 0:01:20.808 **** 2025-09-13 00:57:45.159312 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.159322 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.159331 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.159341 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.159350 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.159360 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.159369 | orchestrator | 2025-09-13 00:57:45.159379 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-13 00:57:45.159394 | orchestrator | Saturday 13 September 2025 00:47:50 +0000 (0:00:00.744) 0:01:21.553 **** 2025-09-13 00:57:45.159404 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.159414 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.159423 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.159432 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.159442 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.159452 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.159461 | orchestrator | 2025-09-13 00:57:45.159471 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-13 00:57:45.159480 | orchestrator | Saturday 13 September 2025 00:47:51 +0000 (0:00:00.528) 0:01:22.082 **** 2025-09-13 00:57:45.159490 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.159499 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.159509 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.159519 | orchestrator | skipping: [testbed-node-0] 2025-09-13 
00:57:45.159528 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.159537 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.159547 | orchestrator | 2025-09-13 00:57:45.159556 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-13 00:57:45.159566 | orchestrator | Saturday 13 September 2025 00:47:51 +0000 (0:00:00.876) 0:01:22.959 **** 2025-09-13 00:57:45.159575 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.159585 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.159594 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.159604 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.159613 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.159623 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.159632 | orchestrator | 2025-09-13 00:57:45.159642 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-13 00:57:45.159651 | orchestrator | Saturday 13 September 2025 00:47:52 +0000 (0:00:00.575) 0:01:23.534 **** 2025-09-13 00:57:45.159661 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.159670 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.159680 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.159689 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.159699 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.159708 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.159718 | orchestrator | 2025-09-13 00:57:45.159732 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-13 00:57:45.159742 | orchestrator | Saturday 13 September 2025 00:47:53 +0000 (0:00:00.687) 0:01:24.222 **** 2025-09-13 00:57:45.159752 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.159761 | orchestrator | skipping: [testbed-node-4] 2025-09-13 
00:57:45.159771 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.159780 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.159790 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.159799 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.159809 | orchestrator | 2025-09-13 00:57:45.159818 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-13 00:57:45.159828 | orchestrator | Saturday 13 September 2025 00:47:53 +0000 (0:00:00.619) 0:01:24.841 **** 2025-09-13 00:57:45.159837 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.159847 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.159902 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.159913 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.159922 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.159932 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.159941 | orchestrator | 2025-09-13 00:57:45.159951 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-13 00:57:45.159960 | orchestrator | Saturday 13 September 2025 00:47:54 +0000 (0:00:00.649) 0:01:25.491 **** 2025-09-13 00:57:45.159970 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.159979 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.159995 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.160004 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.160014 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.160023 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.160033 | orchestrator | 2025-09-13 00:57:45.160043 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-13 00:57:45.160052 | orchestrator | Saturday 13 September 2025 00:47:55 +0000 (0:00:01.082) 0:01:26.573 **** 2025-09-13 00:57:45.160062 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:57:45.160072 | 
orchestrator | changed: [testbed-node-3] 2025-09-13 00:57:45.160081 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:57:45.160090 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:57:45.160100 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:57:45.160109 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:57:45.160119 | orchestrator | 2025-09-13 00:57:45.160128 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-13 00:57:45.160137 | orchestrator | Saturday 13 September 2025 00:47:56 +0000 (0:00:01.389) 0:01:27.962 **** 2025-09-13 00:57:45.160145 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:57:45.160153 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:57:45.160160 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:57:45.160168 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:57:45.160176 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:57:45.160183 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:57:45.160191 | orchestrator | 2025-09-13 00:57:45.160199 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-13 00:57:45.160207 | orchestrator | Saturday 13 September 2025 00:47:59 +0000 (0:00:02.475) 0:01:30.437 **** 2025-09-13 00:57:45.160215 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:57:45.160223 | orchestrator | 2025-09-13 00:57:45.160231 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-09-13 00:57:45.160238 | orchestrator | Saturday 13 September 2025 00:48:00 +0000 (0:00:01.159) 0:01:31.596 **** 2025-09-13 00:57:45.160246 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.160254 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.160262 | orchestrator | 
skipping: [testbed-node-5] 2025-09-13 00:57:45.160269 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.160277 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.160285 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.160293 | orchestrator | 2025-09-13 00:57:45.160300 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-13 00:57:45.160308 | orchestrator | Saturday 13 September 2025 00:48:01 +0000 (0:00:00.567) 0:01:32.163 **** 2025-09-13 00:57:45.160316 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.160324 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.160332 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.160339 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.160347 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.160355 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.160362 | orchestrator | 2025-09-13 00:57:45.160370 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-13 00:57:45.160378 | orchestrator | Saturday 13 September 2025 00:48:01 +0000 (0:00:00.788) 0:01:32.952 **** 2025-09-13 00:57:45.160386 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-13 00:57:45.160394 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-13 00:57:45.160402 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-13 00:57:45.160410 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-13 00:57:45.160417 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-13 00:57:45.160434 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-13 00:57:45.160442 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-13 00:57:45.160450 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-13 00:57:45.160458 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-13 00:57:45.160465 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-13 00:57:45.160473 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-13 00:57:45.160486 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-13 00:57:45.160494 | orchestrator | 2025-09-13 00:57:45.160502 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-13 00:57:45.160509 | orchestrator | Saturday 13 September 2025 00:48:03 +0000 (0:00:01.329) 0:01:34.281 **** 2025-09-13 00:57:45.160540 | orchestrator | changed: [testbed-node-4] 2025-09-13 00:57:45.160549 | orchestrator | changed: [testbed-node-3] 2025-09-13 00:57:45.160557 | orchestrator | changed: [testbed-node-5] 2025-09-13 00:57:45.160565 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:57:45.160573 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:57:45.160580 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:57:45.160588 | orchestrator | 2025-09-13 00:57:45.160596 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-09-13 00:57:45.160604 | orchestrator | Saturday 13 September 2025 00:48:04 +0000 (0:00:01.059) 0:01:35.341 **** 2025-09-13 00:57:45.160611 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.160619 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.160627 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.160635 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.160642 | 
orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.160654 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.160662 | orchestrator | 2025-09-13 00:57:45.160669 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-13 00:57:45.160677 | orchestrator | Saturday 13 September 2025 00:48:04 +0000 (0:00:00.561) 0:01:35.902 **** 2025-09-13 00:57:45.160685 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.160693 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.160700 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.160708 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.160716 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.160723 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.160731 | orchestrator | 2025-09-13 00:57:45.160739 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-13 00:57:45.160747 | orchestrator | Saturday 13 September 2025 00:48:05 +0000 (0:00:00.665) 0:01:36.567 **** 2025-09-13 00:57:45.160755 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.160762 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.160770 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.160778 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.160785 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.160793 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.160801 | orchestrator | 2025-09-13 00:57:45.160808 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-09-13 00:57:45.160816 | orchestrator | Saturday 13 September 2025 00:48:06 +0000 (0:00:00.481) 0:01:37.049 **** 2025-09-13 00:57:45.160824 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:57:45.160832 | orchestrator | 2025-09-13 00:57:45.160840 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-13 00:57:45.160864 | orchestrator | Saturday 13 September 2025 00:48:07 +0000 (0:00:00.998) 0:01:38.047 **** 2025-09-13 00:57:45.160873 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.160881 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.160888 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.160896 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.160904 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.160912 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.160920 | orchestrator | 2025-09-13 00:57:45.160927 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-13 00:57:45.160936 | orchestrator | Saturday 13 September 2025 00:49:07 +0000 (0:01:00.020) 0:02:38.068 **** 2025-09-13 00:57:45.160943 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-13 00:57:45.160951 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-13 00:57:45.160959 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-13 00:57:45.160967 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.160974 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-13 00:57:45.160982 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-13 00:57:45.160990 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-13 00:57:45.160998 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.161006 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-13 
00:57:45.161014 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-13 00:57:45.161022 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-13 00:57:45.161029 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.161037 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-13 00:57:45.161045 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-13 00:57:45.161053 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-13 00:57:45.161061 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.161069 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-13 00:57:45.161076 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-13 00:57:45.161084 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-13 00:57:45.161092 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.161100 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-13 00:57:45.161112 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-13 00:57:45.161120 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-13 00:57:45.161128 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.161136 | orchestrator | 2025-09-13 00:57:45.161144 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-09-13 00:57:45.161152 | orchestrator | Saturday 13 September 2025 00:49:07 +0000 (0:00:00.617) 0:02:38.685 **** 2025-09-13 00:57:45.161159 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.161167 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.161175 | 
orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.161183 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.161191 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.161198 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.161206 | orchestrator | 2025-09-13 00:57:45.161214 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-09-13 00:57:45.161222 | orchestrator | Saturday 13 September 2025 00:49:08 +0000 (0:00:00.595) 0:02:39.281 **** 2025-09-13 00:57:45.161230 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.161243 | orchestrator | 2025-09-13 00:57:45.161251 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-09-13 00:57:45.161263 | orchestrator | Saturday 13 September 2025 00:49:08 +0000 (0:00:00.365) 0:02:39.647 **** 2025-09-13 00:57:45.161271 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.161278 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.161286 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.161294 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.161302 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.161310 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.161318 | orchestrator | 2025-09-13 00:57:45.161325 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-09-13 00:57:45.161333 | orchestrator | Saturday 13 September 2025 00:49:09 +0000 (0:00:00.615) 0:02:40.262 **** 2025-09-13 00:57:45.161341 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.161349 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.161357 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.161364 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.161372 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.161380 | 
orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.161387 | orchestrator | 2025-09-13 00:57:45.161395 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-09-13 00:57:45.161403 | orchestrator | Saturday 13 September 2025 00:49:10 +0000 (0:00:00.822) 0:02:41.085 **** 2025-09-13 00:57:45.161411 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.161419 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.161426 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.161434 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.161442 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.161449 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.161457 | orchestrator | 2025-09-13 00:57:45.161465 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-09-13 00:57:45.161473 | orchestrator | Saturday 13 September 2025 00:49:10 +0000 (0:00:00.758) 0:02:41.843 **** 2025-09-13 00:57:45.161481 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.161489 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.161496 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.161504 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.161512 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.161520 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.161527 | orchestrator | 2025-09-13 00:57:45.161536 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-09-13 00:57:45.161544 | orchestrator | Saturday 13 September 2025 00:49:13 +0000 (0:00:02.643) 0:02:44.487 **** 2025-09-13 00:57:45.161551 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.161559 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.161567 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.161574 | orchestrator | ok: [testbed-node-0] 2025-09-13 
00:57:45.161582 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.161590 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.161598 | orchestrator | 2025-09-13 00:57:45.161606 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-09-13 00:57:45.161613 | orchestrator | Saturday 13 September 2025 00:49:14 +0000 (0:00:00.722) 0:02:45.209 **** 2025-09-13 00:57:45.161622 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:57:45.161631 | orchestrator | 2025-09-13 00:57:45.161639 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-09-13 00:57:45.161646 | orchestrator | Saturday 13 September 2025 00:49:15 +0000 (0:00:01.283) 0:02:46.493 **** 2025-09-13 00:57:45.161654 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.161662 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.161670 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.161682 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.161690 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.161698 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.161706 | orchestrator | 2025-09-13 00:57:45.161714 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-09-13 00:57:45.161722 | orchestrator | Saturday 13 September 2025 00:49:16 +0000 (0:00:00.646) 0:02:47.140 **** 2025-09-13 00:57:45.161730 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.161737 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.161745 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.161753 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.161760 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.161768 | 
orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.161776 | orchestrator | 2025-09-13 00:57:45.161784 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-09-13 00:57:45.161792 | orchestrator | Saturday 13 September 2025 00:49:17 +0000 (0:00:00.877) 0:02:48.017 **** 2025-09-13 00:57:45.161799 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.161807 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.161815 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.161823 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.161831 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.161843 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.161850 | orchestrator | 2025-09-13 00:57:45.161870 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-09-13 00:57:45.161878 | orchestrator | Saturday 13 September 2025 00:49:17 +0000 (0:00:00.671) 0:02:48.689 **** 2025-09-13 00:57:45.161885 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.161893 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.161901 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.161909 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.161917 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.161924 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.161932 | orchestrator | 2025-09-13 00:57:45.161940 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-09-13 00:57:45.161948 | orchestrator | Saturday 13 September 2025 00:49:18 +0000 (0:00:00.794) 0:02:49.483 **** 2025-09-13 00:57:45.161955 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.161963 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.161971 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.161979 | 
orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.161986 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.161994 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.162002 | orchestrator | 2025-09-13 00:57:45.162014 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-09-13 00:57:45.162043 | orchestrator | Saturday 13 September 2025 00:49:19 +0000 (0:00:00.634) 0:02:50.117 **** 2025-09-13 00:57:45.162051 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.162059 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.162067 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.162076 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.162084 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.162092 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.162099 | orchestrator | 2025-09-13 00:57:45.162107 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-09-13 00:57:45.162115 | orchestrator | Saturday 13 September 2025 00:49:19 +0000 (0:00:00.775) 0:02:50.893 **** 2025-09-13 00:57:45.162123 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.162130 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.162138 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.162146 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.162153 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.162161 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.162174 | orchestrator | 2025-09-13 00:57:45.162182 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-09-13 00:57:45.162190 | orchestrator | Saturday 13 September 2025 00:49:20 +0000 (0:00:00.827) 0:02:51.721 **** 2025-09-13 00:57:45.162198 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.162205 | 
orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.162213 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.162221 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.162228 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.162236 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.162244 | orchestrator | 2025-09-13 00:57:45.162252 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-09-13 00:57:45.162259 | orchestrator | Saturday 13 September 2025 00:49:22 +0000 (0:00:01.394) 0:02:53.115 **** 2025-09-13 00:57:45.162267 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.162275 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.162283 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.162290 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.162298 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.162306 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.162313 | orchestrator | 2025-09-13 00:57:45.162321 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-09-13 00:57:45.162329 | orchestrator | Saturday 13 September 2025 00:49:23 +0000 (0:00:01.703) 0:02:54.819 **** 2025-09-13 00:57:45.162337 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:57:45.162345 | orchestrator | 2025-09-13 00:57:45.162353 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-09-13 00:57:45.162361 | orchestrator | Saturday 13 September 2025 00:49:25 +0000 (0:00:01.637) 0:02:56.456 **** 2025-09-13 00:57:45.162368 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-09-13 00:57:45.162376 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-09-13 00:57:45.162384 | 
orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-09-13 00:57:45.162392 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-09-13 00:57:45.162399 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-09-13 00:57:45.162407 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-09-13 00:57:45.162415 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-09-13 00:57:45.162422 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-09-13 00:57:45.162430 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-09-13 00:57:45.162438 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-09-13 00:57:45.162446 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-09-13 00:57:45.162454 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-09-13 00:57:45.162461 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-09-13 00:57:45.162469 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-09-13 00:57:45.162477 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-09-13 00:57:45.162484 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-09-13 00:57:45.162492 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-09-13 00:57:45.162500 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-09-13 00:57:45.162508 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-09-13 00:57:45.162516 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-09-13 00:57:45.162534 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-09-13 00:57:45.162542 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-09-13 00:57:45.162550 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-09-13 
00:57:45.162566 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-09-13 00:57:45.162574 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-09-13 00:57:45.162581 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-09-13 00:57:45.162589 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-09-13 00:57:45.162597 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-09-13 00:57:45.162605 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-09-13 00:57:45.162613 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-09-13 00:57:45.162620 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-09-13 00:57:45.162628 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-09-13 00:57:45.162636 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-09-13 00:57:45.162647 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-09-13 00:57:45.162656 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-09-13 00:57:45.162663 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-09-13 00:57:45.162671 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-09-13 00:57:45.162679 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-09-13 00:57:45.162686 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-09-13 00:57:45.162694 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-09-13 00:57:45.162702 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-09-13 00:57:45.162709 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-09-13 00:57:45.162717 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-09-13 00:57:45.162725 | 
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-09-13 00:57:45.162732 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-09-13 00:57:45.162740 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-09-13 00:57:45.162748 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-13 00:57:45.162756 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-13 00:57:45.162763 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-13 00:57:45.162771 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-13 00:57:45.162779 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-09-13 00:57:45.162786 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-09-13 00:57:45.162794 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-13 00:57:45.162802 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-13 00:57:45.162810 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-13 00:57:45.162817 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-13 00:57:45.162825 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-13 00:57:45.162833 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-13 00:57:45.162841 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-13 00:57:45.162848 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-13 00:57:45.162889 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-13 00:57:45.162898 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-13 
00:57:45.162906 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-13 00:57:45.162913 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-13 00:57:45.162921 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-13 00:57:45.162935 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-13 00:57:45.162943 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-13 00:57:45.162950 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-13 00:57:45.162958 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-13 00:57:45.162966 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-13 00:57:45.162974 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-13 00:57:45.162981 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-13 00:57:45.162989 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-13 00:57:45.162997 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-13 00:57:45.163005 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-13 00:57:45.163012 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-13 00:57:45.163020 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-13 00:57:45.163028 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-13 00:57:45.163041 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-13 00:57:45.163049 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-13 00:57:45.163057 | orchestrator | 
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-13 00:57:45.163064 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-09-13 00:57:45.163072 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-13 00:57:45.163080 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-13 00:57:45.163088 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-13 00:57:45.163096 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-13 00:57:45.163103 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-09-13 00:57:45.163111 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-09-13 00:57:45.163119 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-13 00:57:45.163127 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-13 00:57:45.163139 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-13 00:57:45.163147 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-09-13 00:57:45.163155 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-13 00:57:45.163162 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-13 00:57:45.163170 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-09-13 00:57:45.163178 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-13 00:57:45.163186 | orchestrator | 2025-09-13 00:57:45.163193 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-09-13 00:57:45.163201 | orchestrator | Saturday 13 September 2025 00:49:32 +0000 (0:00:06.937) 0:03:03.394 **** 2025-09-13 00:57:45.163209 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.163217 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.163224 | 
orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.163232 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-13 00:57:45.163240 | orchestrator | 2025-09-13 00:57:45.163248 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-13 00:57:45.163256 | orchestrator | Saturday 13 September 2025 00:49:33 +0000 (0:00:01.277) 0:03:04.671 **** 2025-09-13 00:57:45.163263 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-13 00:57:45.163277 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-13 00:57:45.163285 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-13 00:57:45.163293 | orchestrator | 2025-09-13 00:57:45.163300 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-09-13 00:57:45.163308 | orchestrator | Saturday 13 September 2025 00:49:34 +0000 (0:00:00.905) 0:03:05.577 **** 2025-09-13 00:57:45.163316 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-13 00:57:45.163324 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-13 00:57:45.163332 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-13 00:57:45.163340 | orchestrator | 2025-09-13 00:57:45.163347 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 
2025-09-13 00:57:45.163355 | orchestrator | Saturday 13 September 2025 00:49:36 +0000 (0:00:01.683) 0:03:07.261 **** 2025-09-13 00:57:45.163363 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.163371 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.163379 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.163386 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.163394 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.163402 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.163410 | orchestrator | 2025-09-13 00:57:45.163416 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-13 00:57:45.163423 | orchestrator | Saturday 13 September 2025 00:49:37 +0000 (0:00:00.864) 0:03:08.125 **** 2025-09-13 00:57:45.163429 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.163436 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.163443 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.163449 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.163456 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.163462 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.163469 | orchestrator | 2025-09-13 00:57:45.163476 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-13 00:57:45.163482 | orchestrator | Saturday 13 September 2025 00:49:37 +0000 (0:00:00.884) 0:03:09.009 **** 2025-09-13 00:57:45.163489 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.163495 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.163502 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.163508 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.163515 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.163521 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.163528 | orchestrator | 2025-09-13 
00:57:45.163534 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-13 00:57:45.163541 | orchestrator | Saturday 13 September 2025 00:49:38 +0000 (0:00:00.979) 0:03:09.989 **** 2025-09-13 00:57:45.163551 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.163558 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.163564 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.163570 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.163577 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.163583 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.163590 | orchestrator | 2025-09-13 00:57:45.163597 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-13 00:57:45.163603 | orchestrator | Saturday 13 September 2025 00:49:39 +0000 (0:00:00.804) 0:03:10.794 **** 2025-09-13 00:57:45.163610 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.163621 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.163627 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.163634 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.163640 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.163647 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.163653 | orchestrator | 2025-09-13 00:57:45.163660 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-09-13 00:57:45.163667 | orchestrator | Saturday 13 September 2025 00:49:40 +0000 (0:00:01.138) 0:03:11.933 **** 2025-09-13 00:57:45.163673 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.163686 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.163693 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.163699 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.163706 | 
orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.163712 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.163719 | orchestrator | 2025-09-13 00:57:45.163726 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-13 00:57:45.163732 | orchestrator | Saturday 13 September 2025 00:49:41 +0000 (0:00:00.665) 0:03:12.598 **** 2025-09-13 00:57:45.163739 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.163746 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.163752 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.163759 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.163765 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.163772 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.163779 | orchestrator | 2025-09-13 00:57:45.163785 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-13 00:57:45.163792 | orchestrator | Saturday 13 September 2025 00:49:42 +0000 (0:00:00.931) 0:03:13.529 **** 2025-09-13 00:57:45.163798 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.163805 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.163812 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.163818 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.163825 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.163831 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.163838 | orchestrator | 2025-09-13 00:57:45.163844 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-13 00:57:45.163851 | orchestrator | Saturday 13 September 2025 00:49:43 +0000 (0:00:00.652) 0:03:14.182 **** 2025-09-13 00:57:45.163870 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.163877 | 
orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.163883 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.163890 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.163897 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.163903 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.163910 | orchestrator | 2025-09-13 00:57:45.163916 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-13 00:57:45.163923 | orchestrator | Saturday 13 September 2025 00:49:46 +0000 (0:00:03.525) 0:03:17.708 **** 2025-09-13 00:57:45.163929 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.163936 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.163942 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.163949 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.163955 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.163962 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.163968 | orchestrator | 2025-09-13 00:57:45.163975 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-13 00:57:45.163982 | orchestrator | Saturday 13 September 2025 00:49:47 +0000 (0:00:00.938) 0:03:18.646 **** 2025-09-13 00:57:45.163988 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.163995 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.164001 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.164008 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.164019 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.164025 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.164032 | orchestrator | 2025-09-13 00:57:45.164039 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-09-13 00:57:45.164045 | orchestrator | Saturday 13 September 2025 00:49:48 +0000 (0:00:00.941) 0:03:19.588 **** 2025-09-13 
00:57:45.164052 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.164058 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.164065 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.164071 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.164078 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.164084 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.164091 | orchestrator |
2025-09-13 00:57:45.164098 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-09-13 00:57:45.164104 | orchestrator | Saturday 13 September 2025 00:49:49 +0000 (0:00:00.576) 0:03:20.165 ****
2025-09-13 00:57:45.164111 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-13 00:57:45.164118 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-13 00:57:45.164124 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-13 00:57:45.164131 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.164137 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.164144 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.164150 | orchestrator |
2025-09-13 00:57:45.164160 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-09-13 00:57:45.164167 | orchestrator | Saturday 13 September 2025 00:49:49 +0000 (0:00:00.778) 0:03:20.943 ****
2025-09-13 00:57:45.164175 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-09-13 00:57:45.164184 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-09-13 00:57:45.164192 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.164203 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-09-13 00:57:45.164210 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-09-13 00:57:45.164217 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-09-13 00:57:45.164223 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-09-13 00:57:45.164235 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.164241 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.164248 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.164254 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.164261 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.164268 | orchestrator |
2025-09-13 00:57:45.164274 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-09-13 00:57:45.164281 | orchestrator | Saturday 13 September 2025 00:49:50 +0000 (0:00:00.593) 0:03:21.537 ****
2025-09-13 00:57:45.164287 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.164294 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.164300 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.164307 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.164313 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.164320 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.164326 | orchestrator |
2025-09-13 00:57:45.164333 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-09-13 00:57:45.164340 | orchestrator | Saturday 13 September 2025 00:49:51 +0000 (0:00:00.680) 0:03:22.217 ****
2025-09-13 00:57:45.164346 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.164353 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.164359 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.164366 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.164372 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.164379 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.164385 | orchestrator |
2025-09-13 00:57:45.164392 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-13 00:57:45.164399 | orchestrator | Saturday 13 September 2025 00:49:51 +0000 (0:00:00.572) 0:03:22.790 ****
2025-09-13 00:57:45.164405 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.164412 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.164418 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.164425 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.164431 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.164438 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.164444 | orchestrator |
2025-09-13 00:57:45.164451 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-13 00:57:45.164458 | orchestrator | Saturday 13 September 2025 00:49:52 +0000 (0:00:01.155) 0:03:23.945 ****
2025-09-13 00:57:45.164464 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.164471 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.164477 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.164484 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.164491 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.164497 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.164503 | orchestrator |
2025-09-13 00:57:45.164510 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-13 00:57:45.164517 | orchestrator | Saturday 13 September 2025 00:49:53 +0000 (0:00:00.687) 0:03:24.633 ****
2025-09-13 00:57:45.164523 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.164533 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.164540 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.164546 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.164553 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.164559 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.164566 | orchestrator |
2025-09-13 00:57:45.164572 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-13 00:57:45.164579 | orchestrator | Saturday 13 September 2025 00:49:54 +0000 (0:00:00.912) 0:03:25.545 ****
2025-09-13 00:57:45.164586 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.164592 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.164603 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.164610 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.164616 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.164623 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.164629 | orchestrator |
2025-09-13 00:57:45.164636 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-13 00:57:45.164642 | orchestrator | Saturday 13 September 2025 00:49:55 +0000 (0:00:01.157) 0:03:26.703 ****
2025-09-13 00:57:45.164649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-13 00:57:45.164659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-13 00:57:45.164665 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-13 00:57:45.164672 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.164679 | orchestrator |
2025-09-13 00:57:45.164685 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-13 00:57:45.164692 | orchestrator | Saturday 13 September 2025 00:49:56 +0000 (0:00:00.620) 0:03:27.324 ****
2025-09-13 00:57:45.164698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-13 00:57:45.164705 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-13 00:57:45.164712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-13 00:57:45.164718 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.164725 | orchestrator |
2025-09-13 00:57:45.164731 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-13 00:57:45.164738 | orchestrator | Saturday 13 September 2025 00:49:57 +0000 (0:00:01.225) 0:03:28.549 ****
2025-09-13 00:57:45.164744 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-13 00:57:45.164751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-13 00:57:45.164757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-13 00:57:45.164764 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.164770 | orchestrator |
2025-09-13 00:57:45.164777 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-13 00:57:45.164783 | orchestrator | Saturday 13 September 2025 00:49:58 +0000 (0:00:01.113) 0:03:29.663 ****
2025-09-13 00:57:45.164790 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.164796 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.164803 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.164810 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.164816 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.164823 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.164829 | orchestrator |
2025-09-13 00:57:45.164836 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-13 00:57:45.164842 | orchestrator | Saturday 13 September 2025 00:50:00 +0000 (0:00:01.423) 0:03:31.086 ****
2025-09-13 00:57:45.164849 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-13 00:57:45.164883 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-13 00:57:45.164891 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-09-13 00:57:45.164898 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.164904 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-09-13 00:57:45.164911 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-13 00:57:45.164918 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.164925 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-09-13 00:57:45.164931 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.164938 | orchestrator |
2025-09-13 00:57:45.164944 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-09-13 00:57:45.164951 | orchestrator | Saturday 13 September 2025 00:50:03 +0000 (0:00:03.401) 0:03:34.487 ****
2025-09-13 00:57:45.164958 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:57:45.164964 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:57:45.164971 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:57:45.164977 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:57:45.164988 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:57:45.164995 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:57:45.165001 | orchestrator |
2025-09-13 00:57:45.165008 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-13 00:57:45.165015 | orchestrator | Saturday 13 September 2025 00:50:06 +0000 (0:00:02.731) 0:03:37.219 ****
2025-09-13 00:57:45.165021 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:57:45.165028 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:57:45.165034 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:57:45.165041 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:57:45.165047 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:57:45.165054 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:57:45.165061 | orchestrator |
2025-09-13 00:57:45.165067 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-09-13 00:57:45.165074 | orchestrator | Saturday 13 September 2025 00:50:07 +0000 (0:00:01.561) 0:03:38.780 ****
2025-09-13 00:57:45.165080 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.165087 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.165093 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.165100 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:57:45.165107 | orchestrator |
2025-09-13 00:57:45.165113 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-09-13 00:57:45.165120 | orchestrator | Saturday 13 September 2025 00:50:08 +0000 (0:00:01.040) 0:03:39.820 ****
2025-09-13 00:57:45.165126 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.165133 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.165140 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.165146 | orchestrator |
2025-09-13 00:57:45.165157 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-09-13 00:57:45.165164 | orchestrator | Saturday 13 September 2025 00:50:09 +0000 (0:00:00.363) 0:03:40.184 ****
2025-09-13 00:57:45.165171 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:57:45.165178 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:57:45.165184 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:57:45.165191 | orchestrator |
2025-09-13 00:57:45.165198 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-09-13 00:57:45.165204 | orchestrator | Saturday 13 September 2025 00:50:10 +0000 (0:00:01.379) 0:03:41.563 ****
2025-09-13 00:57:45.165211 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-13 00:57:45.165218 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-13 00:57:45.165224 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-13 00:57:45.165231 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.165238 | orchestrator |
2025-09-13 00:57:45.165244 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-09-13 00:57:45.165251 | orchestrator | Saturday 13 September 2025 00:50:11 +0000 (0:00:00.639) 0:03:42.202 ****
2025-09-13 00:57:45.165262 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.165269 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.165275 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.165282 | orchestrator |
2025-09-13 00:57:45.165289 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-09-13 00:57:45.165295 | orchestrator | Saturday 13 September 2025 00:50:11 +0000 (0:00:00.425) 0:03:42.628 ****
2025-09-13 00:57:45.165302 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.165309 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.165315 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.165322 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:57:45.165329 | orchestrator |
2025-09-13 00:57:45.165335 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-09-13 00:57:45.165342 | orchestrator | Saturday 13 September 2025 00:50:12 +0000 (0:00:01.082) 0:03:43.710 ****
2025-09-13 00:57:45.165353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-13 00:57:45.165360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-13 00:57:45.165366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-13 00:57:45.165373 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.165380 | orchestrator |
2025-09-13 00:57:45.165386 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-09-13 00:57:45.165393 | orchestrator | Saturday 13 September 2025 00:50:13 +0000 (0:00:00.847) 0:03:44.558 ****
2025-09-13 00:57:45.165400 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.165406 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.165413 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.165419 | orchestrator |
2025-09-13 00:57:45.165426 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-09-13 00:57:45.165433 | orchestrator | Saturday 13 September 2025 00:50:13 +0000 (0:00:00.337) 0:03:44.896 ****
2025-09-13 00:57:45.165439 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.165445 | orchestrator |
2025-09-13 00:57:45.165451 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-09-13 00:57:45.165457 | orchestrator | Saturday 13 September 2025 00:50:14 +0000 (0:00:00.505) 0:03:45.402 ****
2025-09-13 00:57:45.165464 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.165470 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.165476 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.165482 | orchestrator |
2025-09-13 00:57:45.165488 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-09-13 00:57:45.165494 | orchestrator | Saturday 13 September 2025 00:50:14 +0000 (0:00:00.276) 0:03:45.679 ****
2025-09-13 00:57:45.165500 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.165507 | orchestrator |
2025-09-13 00:57:45.165513 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-09-13 00:57:45.165519 | orchestrator | Saturday 13 September 2025 00:50:14 +0000 (0:00:00.204) 0:03:45.884 ****
2025-09-13 00:57:45.165525 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.165531 | orchestrator |
2025-09-13 00:57:45.165537 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-09-13 00:57:45.165544 | orchestrator | Saturday 13 September 2025 00:50:15 +0000 (0:00:00.250) 0:03:46.134 ****
2025-09-13 00:57:45.165550 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.165556 | orchestrator |
2025-09-13 00:57:45.165562 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-09-13 00:57:45.165568 | orchestrator | Saturday 13 September 2025 00:50:15 +0000 (0:00:00.134) 0:03:46.268 ****
2025-09-13 00:57:45.165574 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.165581 | orchestrator |
2025-09-13 00:57:45.165587 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-09-13 00:57:45.165593 | orchestrator | Saturday 13 September 2025 00:50:15 +0000 (0:00:00.222) 0:03:46.491 ****
2025-09-13 00:57:45.165599 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.165605 | orchestrator |
2025-09-13 00:57:45.165611 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-09-13 00:57:45.165618 | orchestrator | Saturday 13 September 2025 00:50:15 +0000 (0:00:00.189) 0:03:46.680 ****
2025-09-13 00:57:45.165624 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-13 00:57:45.165630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-13 00:57:45.165636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-13 00:57:45.165642 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.165648 | orchestrator |
2025-09-13 00:57:45.165655 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-09-13 00:57:45.165661 | orchestrator | Saturday 13 September 2025 00:50:16 +0000 (0:00:00.369) 0:03:47.050 ****
2025-09-13 00:57:45.165667 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.165680 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.165687 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.165693 | orchestrator |
2025-09-13 00:57:45.165700 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-09-13 00:57:45.165706 | orchestrator | Saturday 13 September 2025 00:50:16 +0000 (0:00:00.567) 0:03:47.617 ****
2025-09-13 00:57:45.165712 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.165719 | orchestrator |
2025-09-13 00:57:45.165725 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-09-13 00:57:45.165731 | orchestrator | Saturday 13 September 2025 00:50:16 +0000 (0:00:00.304) 0:03:47.921 ****
2025-09-13 00:57:45.165737 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.165743 | orchestrator |
2025-09-13 00:57:45.165749 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-09-13 00:57:45.165756 | orchestrator | Saturday 13 September 2025 00:50:17 +0000 (0:00:00.294) 0:03:48.216 ****
2025-09-13 00:57:45.165762 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.165768 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.165774 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.165783 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:57:45.165790 | orchestrator |
2025-09-13 00:57:45.165796 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-09-13 00:57:45.165803 | orchestrator | Saturday 13 September 2025 00:50:18 +0000 (0:00:01.121) 0:03:49.337 ****
2025-09-13 00:57:45.165809 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.165815 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.165821 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.165827 | orchestrator |
2025-09-13 00:57:45.165833 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-09-13 00:57:45.165840 | orchestrator | Saturday 13 September 2025 00:50:18 +0000 (0:00:00.573) 0:03:49.911 ****
2025-09-13 00:57:45.165846 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:57:45.165862 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:57:45.165869 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:57:45.165875 | orchestrator |
2025-09-13 00:57:45.165881 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-09-13 00:57:45.165888 | orchestrator | Saturday 13 September 2025 00:50:20 +0000 (0:00:01.205) 0:03:51.117 ****
2025-09-13 00:57:45.165894 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-13 00:57:45.165900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-13 00:57:45.165906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-13 00:57:45.165912 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.165919 | orchestrator |
2025-09-13 00:57:45.165925 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-09-13 00:57:45.165931 | orchestrator | Saturday 13 September 2025 00:50:20 +0000 (0:00:00.675) 0:03:51.792 ****
2025-09-13 00:57:45.165937 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.165944 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.165950 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.165956 | orchestrator |
2025-09-13 00:57:45.165962 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-09-13 00:57:45.165968 | orchestrator | Saturday 13 September 2025 00:50:21 +0000 (0:00:00.329) 0:03:52.121 ****
2025-09-13 00:57:45.165974 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.165980 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.165986 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.165993 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:57:45.165999 | orchestrator |
2025-09-13 00:57:45.166005 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-09-13 00:57:45.166011 | orchestrator | Saturday 13 September 2025 00:50:22 +0000 (0:00:01.330) 0:03:53.452 ****
2025-09-13 00:57:45.166089 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.166096 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.166103 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.166109 | orchestrator |
2025-09-13 00:57:45.166115 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-09-13 00:57:45.166121 | orchestrator | Saturday 13 September 2025 00:50:22 +0000 (0:00:00.362) 0:03:53.814 ****
2025-09-13 00:57:45.166128 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:57:45.166134 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:57:45.166140 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:57:45.166146 | orchestrator |
2025-09-13 00:57:45.166152 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-09-13 00:57:45.166158 | orchestrator | Saturday 13 September 2025 00:50:24 +0000 (0:00:01.474) 0:03:55.289 ****
2025-09-13 00:57:45.166164 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-13 00:57:45.166171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-13 00:57:45.166177 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-13 00:57:45.166183 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.166190 | orchestrator |
2025-09-13 00:57:45.166196 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-09-13 00:57:45.166202 | orchestrator | Saturday 13 September 2025 00:50:24 +0000 (0:00:00.582) 0:03:55.872 ****
2025-09-13 00:57:45.166208 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.166214 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.166221 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.166227 | orchestrator |
2025-09-13 00:57:45.166233 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-09-13 00:57:45.166239 | orchestrator | Saturday 13 September 2025 00:50:25 +0000 (0:00:00.367) 0:03:56.239 ****
2025-09-13 00:57:45.166245 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.166252 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.166258 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.166264 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.166270 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.166276 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.166283 | orchestrator |
2025-09-13 00:57:45.166289 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-09-13 00:57:45.166313 | orchestrator | Saturday 13 September 2025 00:50:25 +0000 (0:00:00.754) 0:03:56.993 ****
2025-09-13 00:57:45.166321 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.166327 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.166333 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.166340 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:57:45.166346 | orchestrator |
2025-09-13 00:57:45.166352 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-09-13 00:57:45.166358 | orchestrator | Saturday 13 September 2025 00:50:27 +0000 (0:00:01.158) 0:03:58.151 ****
2025-09-13 00:57:45.166364 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.166371 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.166377 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.166383 | orchestrator |
2025-09-13 00:57:45.166389 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-09-13 00:57:45.166396 | orchestrator | Saturday 13 September 2025 00:50:27 +0000 (0:00:00.309) 0:03:58.461 ****
2025-09-13 00:57:45.166402 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:57:45.166408 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:57:45.166418 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:57:45.166425 | orchestrator |
2025-09-13 00:57:45.166431 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-09-13 00:57:45.166437 | orchestrator | Saturday 13 September 2025 00:50:29 +0000 (0:00:01.653) 0:04:00.114 ****
2025-09-13 00:57:45.166449 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-13 00:57:45.166455 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-13 00:57:45.166461 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-13 00:57:45.166468 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.166474 | orchestrator |
2025-09-13 00:57:45.166480 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-09-13 00:57:45.166486 | orchestrator | Saturday 13 September 2025 00:50:29 +0000 (0:00:00.743) 0:04:00.858 ****
2025-09-13 00:57:45.166492 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.166499 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.166505 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.166511 | orchestrator |
2025-09-13 00:57:45.166517 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-09-13 00:57:45.166523 | orchestrator |
2025-09-13 00:57:45.166529 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-13 00:57:45.166536 | orchestrator | Saturday 13 September 2025 00:50:30 +0000 (0:00:00.563) 0:04:01.422 ****
2025-09-13 00:57:45.166542 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:57:45.166548 | orchestrator |
2025-09-13 00:57:45.166554 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-13 00:57:45.166560 | orchestrator | Saturday 13 September 2025 00:50:30 +0000 (0:00:00.510) 0:04:01.932 ****
2025-09-13 00:57:45.166566 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:57:45.166573 | orchestrator |
2025-09-13 00:57:45.166579 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-13 00:57:45.166585 | orchestrator | Saturday 13 September 2025 00:50:31 +0000 (0:00:00.376) 0:04:02.308 ****
2025-09-13 00:57:45.166591 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.166597 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.166603 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.166609 | orchestrator |
2025-09-13 00:57:45.166616 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-13 00:57:45.166622 | orchestrator | Saturday 13 September 2025 00:50:31 +0000 (0:00:00.639) 0:04:02.948 ****
2025-09-13 00:57:45.166628 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.166634 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.166640 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.166647 | orchestrator |
2025-09-13 00:57:45.166653 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-13 00:57:45.166659 | orchestrator | Saturday 13 September 2025 00:50:32 +0000 (0:00:00.315) 0:04:03.263 ****
2025-09-13 00:57:45.166665 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.166671 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.166677 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.166684 | orchestrator |
2025-09-13 00:57:45.166690 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-13 00:57:45.166696 | orchestrator | Saturday 13 September 2025 00:50:32 +0000 (0:00:00.446) 0:04:03.709 ****
2025-09-13 00:57:45.166702 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.166708 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.166715 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.166721 | orchestrator |
2025-09-13 00:57:45.166727 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-13 00:57:45.166733 | orchestrator | Saturday 13 September 2025 00:50:32 +0000 (0:00:00.261) 0:04:03.971 ****
2025-09-13 00:57:45.166740 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.166746 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.166752 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.166758 | orchestrator |
2025-09-13 00:57:45.166765 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-13 00:57:45.166775 | orchestrator | Saturday 13 September 2025 00:50:33 +0000 (0:00:00.674) 0:04:04.646 ****
2025-09-13 00:57:45.166781 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.166788 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.166794 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.166800 | orchestrator |
2025-09-13 00:57:45.166806 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-13 00:57:45.166813 | orchestrator | Saturday 13 September 2025 00:50:33 +0000 (0:00:00.301) 0:04:04.947 ****
2025-09-13 00:57:45.166819 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.166825 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.166831 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.166837 | orchestrator |
2025-09-13 00:57:45.166870 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-13 00:57:45.166878 | orchestrator | Saturday 13 September 2025 00:50:34 +0000 (0:00:00.415) 0:04:05.363 ****
2025-09-13 00:57:45.166884 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.166890 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.166897 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.166903 | orchestrator |
2025-09-13 00:57:45.166910 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-13 00:57:45.166916 | orchestrator | Saturday 13 September 2025 00:50:35 +0000 (0:00:00.647) 0:04:06.011 ****
2025-09-13 00:57:45.166922 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.166928 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.166934 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.166940 | orchestrator |
2025-09-13 00:57:45.166946 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-13 00:57:45.166953 | orchestrator | Saturday 13 September 2025 00:50:35 +0000 (0:00:00.731) 0:04:06.742 ****
2025-09-13 00:57:45.166959 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.166965 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.166971 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.166977 | orchestrator |
2025-09-13 00:57:45.166990 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-13 00:57:45.166997 | orchestrator | Saturday 13 September 2025 00:50:35 +0000 (0:00:00.418) 0:04:06.997 ****
2025-09-13 00:57:45.167003 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.167009 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.167015 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.167021 | orchestrator |
2025-09-13 00:57:45.167027 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-13 00:57:45.167034 | orchestrator | Saturday 13 September 2025 00:50:36 +0000 (0:00:00.418) 0:04:07.415 ****
2025-09-13 00:57:45.167040 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.167046 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.167052 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.167058 | orchestrator |
2025-09-13 00:57:45.167064 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-13 00:57:45.167071 | orchestrator | Saturday 13 September 2025 00:50:36 +0000 (0:00:00.257) 0:04:07.673 ****
2025-09-13 00:57:45.167077 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.167083 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.167089 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.167095 | orchestrator |
2025-09-13 00:57:45.167101 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-13 00:57:45.167108 | orchestrator | Saturday 13 September 2025 00:50:36 +0000 (0:00:00.269) 0:04:07.942 ****
2025-09-13 00:57:45.167114 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.167120 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.167126 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.167132 | orchestrator |
2025-09-13 00:57:45.167138 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-13 00:57:45.167144 | orchestrator | Saturday 13 September 2025 00:50:37 +0000 (0:00:00.272)
0:04:08.214 **** 2025-09-13 00:57:45.167155 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.167161 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.167167 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.167173 | orchestrator | 2025-09-13 00:57:45.167179 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-13 00:57:45.167186 | orchestrator | Saturday 13 September 2025 00:50:37 +0000 (0:00:00.396) 0:04:08.611 **** 2025-09-13 00:57:45.167192 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.167198 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.167204 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.167211 | orchestrator | 2025-09-13 00:57:45.167217 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-13 00:57:45.167223 | orchestrator | Saturday 13 September 2025 00:50:37 +0000 (0:00:00.267) 0:04:08.879 **** 2025-09-13 00:57:45.167229 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.167235 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.167241 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.167247 | orchestrator | 2025-09-13 00:57:45.167254 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-13 00:57:45.167260 | orchestrator | Saturday 13 September 2025 00:50:38 +0000 (0:00:00.295) 0:04:09.174 **** 2025-09-13 00:57:45.167266 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.167272 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.167278 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.167284 | orchestrator | 2025-09-13 00:57:45.167290 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-13 00:57:45.167296 | orchestrator | Saturday 13 September 2025 00:50:38 +0000 (0:00:00.286) 0:04:09.461 **** 2025-09-13 
00:57:45.167302 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.167309 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.167315 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.167321 | orchestrator | 2025-09-13 00:57:45.167327 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-09-13 00:57:45.167333 | orchestrator | Saturday 13 September 2025 00:50:39 +0000 (0:00:00.617) 0:04:10.078 **** 2025-09-13 00:57:45.167339 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.167346 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.167352 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.167358 | orchestrator | 2025-09-13 00:57:45.167364 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-09-13 00:57:45.167370 | orchestrator | Saturday 13 September 2025 00:50:39 +0000 (0:00:00.315) 0:04:10.393 **** 2025-09-13 00:57:45.167377 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:57:45.167383 | orchestrator | 2025-09-13 00:57:45.167389 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-09-13 00:57:45.167395 | orchestrator | Saturday 13 September 2025 00:50:39 +0000 (0:00:00.498) 0:04:10.892 **** 2025-09-13 00:57:45.167401 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.167408 | orchestrator | 2025-09-13 00:57:45.167414 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-09-13 00:57:45.167435 | orchestrator | Saturday 13 September 2025 00:50:40 +0000 (0:00:00.390) 0:04:11.283 **** 2025-09-13 00:57:45.167442 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-09-13 00:57:45.167448 | orchestrator | 2025-09-13 00:57:45.167454 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] 
**************************** 2025-09-13 00:57:45.167460 | orchestrator | Saturday 13 September 2025 00:50:41 +0000 (0:00:01.032) 0:04:12.315 **** 2025-09-13 00:57:45.167466 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.167472 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.167478 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.167485 | orchestrator | 2025-09-13 00:57:45.167491 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-09-13 00:57:45.167497 | orchestrator | Saturday 13 September 2025 00:50:41 +0000 (0:00:00.347) 0:04:12.663 **** 2025-09-13 00:57:45.167507 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.167513 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.167519 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.167525 | orchestrator | 2025-09-13 00:57:45.167531 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-09-13 00:57:45.167538 | orchestrator | Saturday 13 September 2025 00:50:42 +0000 (0:00:00.372) 0:04:13.035 **** 2025-09-13 00:57:45.167544 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:57:45.167550 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:57:45.167559 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:57:45.167565 | orchestrator | 2025-09-13 00:57:45.167572 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-09-13 00:57:45.167578 | orchestrator | Saturday 13 September 2025 00:50:43 +0000 (0:00:01.209) 0:04:14.245 **** 2025-09-13 00:57:45.167584 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:57:45.167590 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:57:45.167596 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:57:45.167602 | orchestrator | 2025-09-13 00:57:45.167608 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-09-13 
00:57:45.167615 | orchestrator | Saturday 13 September 2025 00:50:44 +0000 (0:00:01.008) 0:04:15.253 **** 2025-09-13 00:57:45.167621 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:57:45.167627 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:57:45.167633 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:57:45.167639 | orchestrator | 2025-09-13 00:57:45.167645 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-09-13 00:57:45.167651 | orchestrator | Saturday 13 September 2025 00:50:44 +0000 (0:00:00.666) 0:04:15.919 **** 2025-09-13 00:57:45.167657 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.167663 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.167669 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.167675 | orchestrator | 2025-09-13 00:57:45.167681 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-09-13 00:57:45.167687 | orchestrator | Saturday 13 September 2025 00:50:45 +0000 (0:00:00.655) 0:04:16.575 **** 2025-09-13 00:57:45.167693 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:57:45.167699 | orchestrator | 2025-09-13 00:57:45.167705 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-09-13 00:57:45.167711 | orchestrator | Saturday 13 September 2025 00:50:46 +0000 (0:00:01.303) 0:04:17.878 **** 2025-09-13 00:57:45.167717 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.167723 | orchestrator | 2025-09-13 00:57:45.167730 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-09-13 00:57:45.167736 | orchestrator | Saturday 13 September 2025 00:50:47 +0000 (0:00:00.687) 0:04:18.565 **** 2025-09-13 00:57:45.167742 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-13 00:57:45.167748 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-13 
00:57:45.167754 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-13 00:57:45.167760 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-13 00:57:45.167766 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-13 00:57:45.167772 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-13 00:57:45.167779 | orchestrator | changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-13 00:57:45.167785 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2025-09-13 00:57:45.167791 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-13 00:57:45.167797 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-09-13 00:57:45.167803 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-09-13 00:57:45.167809 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-09-13 00:57:45.167815 | orchestrator | 2025-09-13 00:57:45.167821 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-09-13 00:57:45.167832 | orchestrator | Saturday 13 September 2025 00:50:50 +0000 (0:00:03.429) 0:04:21.994 **** 2025-09-13 00:57:45.167838 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:57:45.167844 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:57:45.167850 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:57:45.167886 | orchestrator | 2025-09-13 00:57:45.167893 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-09-13 00:57:45.167899 | orchestrator | Saturday 13 September 2025 00:50:52 +0000 (0:00:01.499) 0:04:23.494 **** 2025-09-13 00:57:45.167905 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.167911 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.167917 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.167923 | orchestrator | 
2025-09-13 00:57:45.167928 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-09-13 00:57:45.167933 | orchestrator | Saturday 13 September 2025 00:50:52 +0000 (0:00:00.356) 0:04:23.850 **** 2025-09-13 00:57:45.167939 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.167944 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.167950 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.167955 | orchestrator | 2025-09-13 00:57:45.167960 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-09-13 00:57:45.167966 | orchestrator | Saturday 13 September 2025 00:50:53 +0000 (0:00:00.436) 0:04:24.287 **** 2025-09-13 00:57:45.167971 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:57:45.167976 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:57:45.167982 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:57:45.167987 | orchestrator | 2025-09-13 00:57:45.168009 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-09-13 00:57:45.168015 | orchestrator | Saturday 13 September 2025 00:50:55 +0000 (0:00:01.955) 0:04:26.242 **** 2025-09-13 00:57:45.168020 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:57:45.168026 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:57:45.168031 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:57:45.168036 | orchestrator | 2025-09-13 00:57:45.168042 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-09-13 00:57:45.168047 | orchestrator | Saturday 13 September 2025 00:50:56 +0000 (0:00:01.541) 0:04:27.784 **** 2025-09-13 00:57:45.168052 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.168058 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.168063 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.168068 | orchestrator | 2025-09-13 00:57:45.168074 
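The "Generate initial monmap" and "Ceph monitor mkfs with keyring" tasks above build a starting monitor map for the three mons and initialize each monitor's data directory from it. A minimal sketch of how such a `monmaptool --create` invocation could be assembled — host names and addresses are taken from this log, but the fsid is a placeholder and the exact flags ceph-ansible passes may differ:

```python
# Hedged sketch: assemble a `monmaptool --create` command line for the
# monitors seen in this deployment. The fsid below is a placeholder, not
# the cluster's real fsid.
def monmaptool_cmd(fsid, mons, out="/etc/ceph/monmap"):
    """Build the argv for creating an initial monmap with the given mons."""
    cmd = ["monmaptool", "--create", "--clobber", "--fsid", fsid]
    for name, addr in mons:
        cmd += ["--add", name, addr]  # one --add per monitor
    cmd.append(out)
    return cmd

cmd = monmaptool_cmd(
    "00000000-0000-0000-0000-000000000000",  # placeholder fsid
    [("testbed-node-0", "192.168.16.10"),
     ("testbed-node-1", "192.168.16.11"),
     ("testbed-node-2", "192.168.16.12")],
)
print(" ".join(cmd))
```

Each node then runs `ceph-mon --mkfs` against the resulting monmap and the initial keyring created earlier in the play.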
| orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-13 00:57:45.168079 | orchestrator | Saturday 13 September 2025 00:50:57 +0000 (0:00:00.303) 0:04:28.087 **** 2025-09-13 00:57:45.168084 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:57:45.168090 | orchestrator | 2025-09-13 00:57:45.168099 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-13 00:57:45.168104 | orchestrator | Saturday 13 September 2025 00:50:57 +0000 (0:00:00.508) 0:04:28.596 **** 2025-09-13 00:57:45.168110 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.168115 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.168120 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.168126 | orchestrator | 2025-09-13 00:57:45.168131 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-13 00:57:45.168136 | orchestrator | Saturday 13 September 2025 00:50:58 +0000 (0:00:00.554) 0:04:29.150 **** 2025-09-13 00:57:45.168141 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.168147 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.168152 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.168158 | orchestrator | 2025-09-13 00:57:45.168163 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-13 00:57:45.168168 | orchestrator | Saturday 13 September 2025 00:50:58 +0000 (0:00:00.334) 0:04:29.485 **** 2025-09-13 00:57:45.168178 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:57:45.168183 | orchestrator | 2025-09-13 00:57:45.168189 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-09-13 00:57:45.168194 | 
orchestrator | Saturday 13 September 2025 00:50:59 +0000 (0:00:00.553) 0:04:30.038 **** 2025-09-13 00:57:45.168199 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:57:45.168204 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:57:45.168210 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:57:45.168215 | orchestrator | 2025-09-13 00:57:45.168221 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-13 00:57:45.168226 | orchestrator | Saturday 13 September 2025 00:51:00 +0000 (0:00:01.712) 0:04:31.751 **** 2025-09-13 00:57:45.168231 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:57:45.168237 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:57:45.168242 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:57:45.168247 | orchestrator | 2025-09-13 00:57:45.168253 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-13 00:57:45.168258 | orchestrator | Saturday 13 September 2025 00:51:02 +0000 (0:00:01.637) 0:04:33.388 **** 2025-09-13 00:57:45.168264 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:57:45.168269 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:57:45.168274 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:57:45.168280 | orchestrator | 2025-09-13 00:57:45.168286 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-13 00:57:45.168291 | orchestrator | Saturday 13 September 2025 00:51:04 +0000 (0:00:01.642) 0:04:35.030 **** 2025-09-13 00:57:45.168296 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:57:45.168302 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:57:45.168307 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:57:45.168313 | orchestrator | 2025-09-13 00:57:45.168318 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-13 00:57:45.168323 | orchestrator | 
Saturday 13 September 2025 00:51:05 +0000 (0:00:01.738) 0:04:36.769 **** 2025-09-13 00:57:45.168329 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:57:45.168335 | orchestrator | 2025-09-13 00:57:45.168340 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-09-13 00:57:45.168345 | orchestrator | Saturday 13 September 2025 00:51:06 +0000 (0:00:00.756) 0:04:37.526 **** 2025-09-13 00:57:45.168351 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-09-13 00:57:45.168356 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.168361 | orchestrator | 2025-09-13 00:57:45.168367 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-13 00:57:45.168372 | orchestrator | Saturday 13 September 2025 00:51:28 +0000 (0:00:21.925) 0:04:59.452 **** 2025-09-13 00:57:45.168377 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.168386 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.168395 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.168404 | orchestrator | 2025-09-13 00:57:45.168413 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-13 00:57:45.168422 | orchestrator | Saturday 13 September 2025 00:51:38 +0000 (0:00:10.031) 0:05:09.483 **** 2025-09-13 00:57:45.168431 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.168439 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.168444 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.168449 | orchestrator | 2025-09-13 00:57:45.168455 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-13 00:57:45.168460 | orchestrator | Saturday 13 September 2025 00:51:38 +0000 (0:00:00.320) 0:05:09.804 **** 2025-09-13 
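The "Waiting for the monitor(s) to form the quorum..." task above retried once before succeeding: it polls `ceph quorum_status --format json` until every monitor in the monmap is listed in the quorum. A minimal sketch of that check, assuming the standard `quorum_status` JSON shape — the sample document below is illustrative, not captured from this run:

```python
import json

# Hedged sketch: decide from `ceph quorum_status` JSON whether all mons in
# the monmap have joined the quorum. Sample data is illustrative only.
sample = json.dumps({
    "quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    "monmap": {"mons": [{"name": "testbed-node-0"},
                        {"name": "testbed-node-1"},
                        {"name": "testbed-node-2"}]},
})

def quorum_complete(doc: str) -> bool:
    """True when every monitor in the monmap appears in quorum_names."""
    status = json.loads(doc)
    members = {m["name"] for m in status["monmap"]["mons"]}
    return members <= set(status["quorum_names"])

print(quorum_complete(sample))
```

The Ansible task wraps this kind of check in a retry loop (10 retries here), which is why a single "FAILED - RETRYING" line is normal while the last monitors are still starting.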
00:57:45.168483 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__357b38c63121bc7a0c7033de4e6112a8c1fb800c'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-13 00:57:45.168495 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__357b38c63121bc7a0c7033de4e6112a8c1fb800c'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-13 00:57:45.168504 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__357b38c63121bc7a0c7033de4e6112a8c1fb800c'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-13 00:57:45.168511 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__357b38c63121bc7a0c7033de4e6112a8c1fb800c'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-13 00:57:45.168517 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__357b38c63121bc7a0c7033de4e6112a8c1fb800c'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-13 00:57:45.168523 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__357b38c63121bc7a0c7033de4e6112a8c1fb800c'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__357b38c63121bc7a0c7033de4e6112a8c1fb800c'}])  2025-09-13 00:57:45.168529 | orchestrator | 2025-09-13 00:57:45.168534 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-13 00:57:45.168540 | orchestrator | Saturday 13 September 2025 00:51:52 +0000 (0:00:13.348) 0:05:23.153 **** 2025-09-13 00:57:45.168545 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.168550 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.168556 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.168561 | orchestrator | 2025-09-13 00:57:45.168566 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-13 00:57:45.168572 | orchestrator | Saturday 13 September 2025 00:51:52 +0000 (0:00:00.399) 0:05:23.552 **** 2025-09-13 00:57:45.168577 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:57:45.168582 | orchestrator | 2025-09-13 00:57:45.168588 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-13 00:57:45.168593 | orchestrator | Saturday 13 September 2025 00:51:53 +0000 (0:00:00.542) 0:05:24.094 **** 2025-09-13 00:57:45.168598 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.168604 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.168609 | orchestrator | ok: 
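The loop items printed by the "Set cluster configs" task above show a section dictionary (`global`) being expanded into one `(section, key, value)` item per option, with the `osd_crush_chooseleaf_type` entry skipped because its value is Ansible's `omit` placeholder. A hedged sketch of that expansion, using the values visible in the log — this mirrors what the output suggests, not necessarily the role's exact implementation:

```python
# Hedged sketch: expand a section -> options mapping into per-option items
# and drop values carrying Ansible's omit placeholder, as the skipped
# osd_crush_chooseleaf_type item in the log suggests. Values taken from
# the log output above.
overrides = {
    "global": {
        "public_network": "192.168.16.0/20",
        "cluster_network": "192.168.16.0/20",
        "osd_pool_default_crush_rule": -1,
        "ms_bind_ipv6": "False",
        "ms_bind_ipv4": "True",
        "osd_crush_chooseleaf_type":
            "__omit_place_holder__357b38c63121bc7a0c7033de4e6112a8c1fb800c",
    }
}

def flatten(conf):
    """Yield (section, key, value) triples, skipping omit placeholders."""
    for section, options in conf.items():
        for key, value in options.items():
            if isinstance(value, str) and value.startswith("__omit_place_holder__"):
                continue  # Ansible renders these as `omit`; the task skips them
            yield section, key, value

items = list(flatten(overrides))
```

Each surviving item becomes one `ceph config set` style change, which matches the five `changed` results followed by one `skipping` in the log.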
[testbed-node-2] 2025-09-13 00:57:45.168614 | orchestrator | 2025-09-13 00:57:45.168620 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-13 00:57:45.168625 | orchestrator | Saturday 13 September 2025 00:51:53 +0000 (0:00:00.695) 0:05:24.790 **** 2025-09-13 00:57:45.168630 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.168636 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.168641 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.168650 | orchestrator | 2025-09-13 00:57:45.168656 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-13 00:57:45.168661 | orchestrator | Saturday 13 September 2025 00:51:54 +0000 (0:00:00.320) 0:05:25.110 **** 2025-09-13 00:57:45.168666 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-13 00:57:45.168672 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-13 00:57:45.168677 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-13 00:57:45.168682 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.168688 | orchestrator | 2025-09-13 00:57:45.168693 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-13 00:57:45.168698 | orchestrator | Saturday 13 September 2025 00:51:54 +0000 (0:00:00.578) 0:05:25.688 **** 2025-09-13 00:57:45.168704 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.168709 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.168714 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.168720 | orchestrator | 2025-09-13 00:57:45.168738 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-13 00:57:45.168744 | orchestrator | 2025-09-13 00:57:45.168749 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-13 
00:57:45.168755 | orchestrator | Saturday 13 September 2025 00:51:55 +0000 (0:00:00.915) 0:05:26.604 **** 2025-09-13 00:57:45.168760 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:57:45.168766 | orchestrator | 2025-09-13 00:57:45.168771 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-13 00:57:45.168776 | orchestrator | Saturday 13 September 2025 00:51:56 +0000 (0:00:00.542) 0:05:27.147 **** 2025-09-13 00:57:45.168782 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:57:45.168787 | orchestrator | 2025-09-13 00:57:45.168792 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-13 00:57:45.168798 | orchestrator | Saturday 13 September 2025 00:51:56 +0000 (0:00:00.496) 0:05:27.643 **** 2025-09-13 00:57:45.168803 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:57:45.168808 | orchestrator | ok: [testbed-node-1] 2025-09-13 00:57:45.168817 | orchestrator | ok: [testbed-node-2] 2025-09-13 00:57:45.168822 | orchestrator | 2025-09-13 00:57:45.168828 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-13 00:57:45.168833 | orchestrator | Saturday 13 September 2025 00:51:57 +0000 (0:00:01.046) 0:05:28.690 **** 2025-09-13 00:57:45.168839 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.168844 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.168850 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.168866 | orchestrator | 2025-09-13 00:57:45.168871 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-13 00:57:45.168877 | orchestrator | Saturday 13 September 2025 00:51:58 +0000 (0:00:00.336) 0:05:29.026 **** 
2025-09-13 00:57:45.168882 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.168887 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.168893 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.168898 | orchestrator |
2025-09-13 00:57:45.168904 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-13 00:57:45.168909 | orchestrator | Saturday 13 September 2025 00:51:58 +0000 (0:00:00.303) 0:05:29.330 ****
2025-09-13 00:57:45.168915 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.168920 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.168925 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.168931 | orchestrator |
2025-09-13 00:57:45.168936 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-13 00:57:45.168942 | orchestrator | Saturday 13 September 2025 00:51:58 +0000 (0:00:00.294) 0:05:29.624 ****
2025-09-13 00:57:45.168947 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.168957 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.168962 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.168968 | orchestrator |
2025-09-13 00:57:45.168973 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-13 00:57:45.168979 | orchestrator | Saturday 13 September 2025 00:51:59 +0000 (0:00:00.978) 0:05:30.603 ****
2025-09-13 00:57:45.168984 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.168990 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.168995 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.169000 | orchestrator |
2025-09-13 00:57:45.169006 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-13 00:57:45.169011 | orchestrator | Saturday 13 September 2025 00:51:59 +0000 (0:00:00.312) 0:05:30.916 ****
2025-09-13 00:57:45.169017 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.169022 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.169027 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.169033 | orchestrator |
2025-09-13 00:57:45.169038 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-13 00:57:45.169044 | orchestrator | Saturday 13 September 2025 00:52:00 +0000 (0:00:00.326) 0:05:31.242 ****
2025-09-13 00:57:45.169049 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.169055 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.169060 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.169066 | orchestrator |
2025-09-13 00:57:45.169071 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-13 00:57:45.169077 | orchestrator | Saturday 13 September 2025 00:52:01 +0000 (0:00:00.793) 0:05:32.036 ****
2025-09-13 00:57:45.169082 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.169087 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.169093 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.169098 | orchestrator |
2025-09-13 00:57:45.169104 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-13 00:57:45.169109 | orchestrator | Saturday 13 September 2025 00:52:01 +0000 (0:00:00.959) 0:05:32.996 ****
2025-09-13 00:57:45.169115 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.169120 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.169125 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.169131 | orchestrator |
2025-09-13 00:57:45.169136 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-13 00:57:45.169142 | orchestrator | Saturday 13 September 2025 00:52:02 +0000 (0:00:00.343) 0:05:33.339 ****
2025-09-13 00:57:45.169147 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.169152 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.169158 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.169163 | orchestrator |
2025-09-13 00:57:45.169168 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-13 00:57:45.169174 | orchestrator | Saturday 13 September 2025 00:52:02 +0000 (0:00:00.316) 0:05:33.656 ****
2025-09-13 00:57:45.169179 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.169185 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.169190 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.169195 | orchestrator |
2025-09-13 00:57:45.169201 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-13 00:57:45.169206 | orchestrator | Saturday 13 September 2025 00:52:02 +0000 (0:00:00.301) 0:05:33.957 ****
2025-09-13 00:57:45.169212 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.169217 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.169237 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.169243 | orchestrator |
2025-09-13 00:57:45.169249 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-13 00:57:45.169254 | orchestrator | Saturday 13 September 2025 00:52:03 +0000 (0:00:00.604) 0:05:34.562 ****
2025-09-13 00:57:45.169259 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.169265 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.169274 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.169280 | orchestrator |
2025-09-13 00:57:45.169285 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-13 00:57:45.169290 | orchestrator | Saturday 13 September 2025 00:52:03 +0000 (0:00:00.328) 0:05:34.890 ****
2025-09-13 00:57:45.169296 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.169301 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.169307 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.169312 | orchestrator |
2025-09-13 00:57:45.169317 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-13 00:57:45.169323 | orchestrator | Saturday 13 September 2025 00:52:04 +0000 (0:00:00.308) 0:05:35.199 ****
2025-09-13 00:57:45.169328 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.169333 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.169342 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.169347 | orchestrator |
2025-09-13 00:57:45.169352 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-13 00:57:45.169358 | orchestrator | Saturday 13 September 2025 00:52:04 +0000 (0:00:00.295) 0:05:35.494 ****
2025-09-13 00:57:45.169363 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.169369 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.169374 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.169379 | orchestrator |
2025-09-13 00:57:45.169385 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-13 00:57:45.169390 | orchestrator | Saturday 13 September 2025 00:52:04 +0000 (0:00:00.316) 0:05:35.811 ****
2025-09-13 00:57:45.169396 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.169401 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.169406 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.169412 | orchestrator |
2025-09-13 00:57:45.169417 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-13 00:57:45.169423 | orchestrator | Saturday 13 September 2025 00:52:05 +0000 (0:00:00.413) 0:05:36.224 ****
2025-09-13 00:57:45.169428 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.169433 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.169439 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.169444 | orchestrator |
2025-09-13 00:57:45.169449 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-09-13 00:57:45.169455 | orchestrator | Saturday 13 September 2025 00:52:05 +0000 (0:00:00.468) 0:05:36.693 ****
2025-09-13 00:57:45.169460 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-13 00:57:45.169466 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-13 00:57:45.169471 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-13 00:57:45.169477 | orchestrator |
2025-09-13 00:57:45.169482 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-09-13 00:57:45.169487 | orchestrator | Saturday 13 September 2025 00:52:06 +0000 (0:00:00.741) 0:05:37.435 ****
2025-09-13 00:57:45.169493 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:57:45.169498 | orchestrator |
2025-09-13 00:57:45.169504 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-09-13 00:57:45.169509 | orchestrator | Saturday 13 September 2025 00:52:07 +0000 (0:00:00.617) 0:05:38.053 ****
2025-09-13 00:57:45.169514 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:57:45.169520 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:57:45.169525 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:57:45.169530 | orchestrator |
2025-09-13 00:57:45.169536 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-09-13 00:57:45.169541 | orchestrator | Saturday 13 September 2025 00:52:07 +0000 (0:00:00.635) 0:05:38.688 ****
2025-09-13 00:57:45.169546 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.169552 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.169557 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.169566 | orchestrator |
2025-09-13 00:57:45.169572 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-09-13 00:57:45.169577 | orchestrator | Saturday 13 September 2025 00:52:07 +0000 (0:00:00.321) 0:05:39.010 ****
2025-09-13 00:57:45.169583 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-13 00:57:45.169588 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-13 00:57:45.169593 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-13 00:57:45.169599 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-09-13 00:57:45.169604 | orchestrator |
2025-09-13 00:57:45.169609 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-09-13 00:57:45.169615 | orchestrator | Saturday 13 September 2025 00:52:18 +0000 (0:00:10.071) 0:05:49.081 ****
2025-09-13 00:57:45.169620 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.169626 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.169631 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.169636 | orchestrator |
2025-09-13 00:57:45.169642 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-09-13 00:57:45.169647 | orchestrator | Saturday 13 September 2025 00:52:18 +0000 (0:00:00.493) 0:05:49.575 ****
2025-09-13 00:57:45.169652 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-13 00:57:45.169658 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-13 00:57:45.169663 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-13 00:57:45.169668 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-09-13 00:57:45.169674 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:57:45.169679 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:57:45.169685 | orchestrator |
2025-09-13 00:57:45.169703 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-09-13 00:57:45.169709 | orchestrator | Saturday 13 September 2025 00:52:20 +0000 (0:00:02.108) 0:05:51.684 ****
2025-09-13 00:57:45.169715 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-13 00:57:45.169720 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-13 00:57:45.169726 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-13 00:57:45.169731 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-13 00:57:45.169736 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-13 00:57:45.169742 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-13 00:57:45.169747 | orchestrator |
2025-09-13 00:57:45.169753 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-09-13 00:57:45.169758 | orchestrator | Saturday 13 September 2025 00:52:21 +0000 (0:00:01.195) 0:05:52.879 ****
2025-09-13 00:57:45.169763 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.169769 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.169774 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.169780 | orchestrator |
2025-09-13 00:57:45.169785 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-09-13 00:57:45.169795 | orchestrator | Saturday 13 September 2025 00:52:22 +0000 (0:00:00.628) 0:05:53.508 ****
2025-09-13 00:57:45.169801 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.169806 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.169811 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.169817 | orchestrator |
2025-09-13 00:57:45.169822 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-09-13 00:57:45.169827 | orchestrator | Saturday 13 September 2025 00:52:22 +0000 (0:00:00.268) 0:05:53.777 ****
2025-09-13 00:57:45.169833 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.169838 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.169843 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.169849 | orchestrator |
2025-09-13 00:57:45.169881 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-09-13 00:57:45.169888 | orchestrator | Saturday 13 September 2025 00:52:23 +0000 (0:00:00.412) 0:05:54.189 ****
2025-09-13 00:57:45.169898 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:57:45.169904 | orchestrator |
2025-09-13 00:57:45.169909 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-09-13 00:57:45.169915 | orchestrator | Saturday 13 September 2025 00:52:23 +0000 (0:00:00.514) 0:05:54.703 ****
2025-09-13 00:57:45.169920 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.169925 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.169931 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.169936 | orchestrator |
2025-09-13 00:57:45.169941 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-09-13 00:57:45.169947 | orchestrator | Saturday 13 September 2025 00:52:23 +0000 (0:00:00.257) 0:05:54.960 ****
2025-09-13 00:57:45.169952 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.169957 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.169963 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.169968 | orchestrator |
2025-09-13 00:57:45.169973 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-09-13 00:57:45.169979 | orchestrator | Saturday 13 September 2025 00:52:24 +0000 (0:00:00.411) 0:05:55.372 ****
2025-09-13 00:57:45.169984 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:57:45.169989 | orchestrator |
2025-09-13 00:57:45.169995 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-09-13 00:57:45.170000 | orchestrator | Saturday 13 September 2025 00:52:24 +0000 (0:00:00.486) 0:05:55.858 ****
2025-09-13 00:57:45.170005 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:57:45.170011 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:57:45.170035 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:57:45.170040 | orchestrator |
2025-09-13 00:57:45.170046 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-09-13 00:57:45.170051 | orchestrator | Saturday 13 September 2025 00:52:26 +0000 (0:00:01.178) 0:05:57.037 ****
2025-09-13 00:57:45.170057 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:57:45.170062 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:57:45.170068 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:57:45.170073 | orchestrator |
2025-09-13 00:57:45.170079 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-09-13 00:57:45.170084 | orchestrator | Saturday 13 September 2025 00:52:27 +0000 (0:00:01.351) 0:05:58.388 ****
2025-09-13 00:57:45.170090 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:57:45.170094 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:57:45.170099 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:57:45.170104 | orchestrator |
2025-09-13 00:57:45.170109 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-09-13 00:57:45.170114 | orchestrator | Saturday 13 September 2025 00:52:29 +0000 (0:00:01.646) 0:06:00.035 ****
2025-09-13 00:57:45.170118 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:57:45.170123 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:57:45.170128 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:57:45.170132 | orchestrator |
2025-09-13 00:57:45.170137 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2025-09-13 00:57:45.170142 | orchestrator | Saturday 13 September 2025 00:52:30 +0000 (0:00:01.823) 0:06:01.858 ****
2025-09-13 00:57:45.170147 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.170152 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.170156 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-09-13 00:57:45.170161 | orchestrator |
2025-09-13 00:57:45.170166 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2025-09-13 00:57:45.170171 | orchestrator | Saturday 13 September 2025 00:52:31 +0000 (0:00:00.369) 0:06:02.228 ****
2025-09-13 00:57:45.170176 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2025-09-13 00:57:45.170200 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2025-09-13 00:57:45.170205 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2025-09-13 00:57:45.170210 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2025-09-13 00:57:45.170215 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2025-09-13 00:57:45.170220 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-09-13 00:57:45.170225 | orchestrator |
2025-09-13 00:57:45.170230 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-09-13 00:57:45.170234 | orchestrator | Saturday 13 September 2025 00:53:01 +0000 (0:00:30.467) 0:06:32.695 ****
2025-09-13 00:57:45.170239 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-09-13 00:57:45.170244 | orchestrator |
2025-09-13 00:57:45.170249 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-09-13 00:57:45.170257 | orchestrator | Saturday 13 September 2025 00:53:03 +0000 (0:00:01.846) 0:06:34.542 ****
2025-09-13 00:57:45.170262 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.170267 | orchestrator |
2025-09-13 00:57:45.170271 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-09-13 00:57:45.170276 | orchestrator | Saturday 13 September 2025 00:53:03 +0000 (0:00:00.332) 0:06:34.875 ****
2025-09-13 00:57:45.170281 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.170286 | orchestrator |
2025-09-13 00:57:45.170291 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-09-13 00:57:45.170296 | orchestrator | Saturday 13 September 2025 00:53:04 +0000 (0:00:00.170) 0:06:35.045 ****
2025-09-13 00:57:45.170300 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-09-13 00:57:45.170305 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-09-13 00:57:45.170310 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-09-13 00:57:45.170315 | orchestrator |
2025-09-13 00:57:45.170320 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-09-13 00:57:45.170324 | orchestrator | Saturday 13 September 2025 00:53:10 +0000 (0:00:06.482) 0:06:41.528 ****
2025-09-13 00:57:45.170329 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-09-13 00:57:45.170334 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-09-13 00:57:45.170339 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-09-13 00:57:45.170344 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-09-13 00:57:45.170348 | orchestrator |
2025-09-13 00:57:45.170353 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-13 00:57:45.170358 | orchestrator | Saturday 13 September 2025 00:53:15 +0000 (0:00:05.151) 0:06:46.679 ****
2025-09-13 00:57:45.170363 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:57:45.170367 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:57:45.170372 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:57:45.170377 | orchestrator |
2025-09-13 00:57:45.170382 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-09-13 00:57:45.170386 | orchestrator | Saturday 13 September 2025 00:53:16 +0000 (0:00:00.915) 0:06:47.595 ****
2025-09-13 00:57:45.170391 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:57:45.170396 | orchestrator |
2025-09-13 00:57:45.170401 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-09-13 00:57:45.170405 | orchestrator | Saturday 13 September 2025 00:53:17 +0000 (0:00:00.529) 0:06:48.124 ****
2025-09-13 00:57:45.170414 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.170419 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.170423 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.170428 | orchestrator |
2025-09-13 00:57:45.170433 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-09-13 00:57:45.170438 | orchestrator | Saturday 13 September 2025 00:53:17 +0000 (0:00:00.317) 0:06:48.442 ****
2025-09-13 00:57:45.170443 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:57:45.170447 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:57:45.170452 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:57:45.170457 | orchestrator |
2025-09-13 00:57:45.170462 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-09-13 00:57:45.170466 | orchestrator | Saturday 13 September 2025 00:53:19 +0000 (0:00:01.712) 0:06:50.155 ****
2025-09-13 00:57:45.170471 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-13 00:57:45.170476 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-13 00:57:45.170481 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-13 00:57:45.170485 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.170490 | orchestrator |
2025-09-13 00:57:45.170495 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-09-13 00:57:45.170500 | orchestrator | Saturday 13 September 2025 00:53:19 +0000 (0:00:00.613) 0:06:50.768 ****
2025-09-13 00:57:45.170505 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.170509 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.170514 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.170519 | orchestrator |
2025-09-13 00:57:45.170524 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-09-13 00:57:45.170529 | orchestrator |
2025-09-13 00:57:45.170533 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-13 00:57:45.170538 | orchestrator | Saturday 13 September 2025 00:53:20 +0000 (0:00:00.647) 0:06:51.416 ****
2025-09-13 00:57:45.170543 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:57:45.170548 | orchestrator |
2025-09-13 00:57:45.170564 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-13 00:57:45.170570 | orchestrator | Saturday 13 September 2025 00:53:21 +0000 (0:00:01.024) 0:06:52.441 ****
2025-09-13 00:57:45.170574 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:57:45.170579 | orchestrator |
2025-09-13 00:57:45.170584 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-13 00:57:45.170589 | orchestrator | Saturday 13 September 2025 00:53:22 +0000 (0:00:00.856) 0:06:53.297 ****
2025-09-13 00:57:45.170594 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.170598 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.170603 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.170608 | orchestrator |
2025-09-13 00:57:45.170613 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-13 00:57:45.170618 | orchestrator | Saturday 13 September 2025 00:53:22 +0000 (0:00:00.422) 0:06:53.720 ****
2025-09-13 00:57:45.170623 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.170627 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.170632 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.170637 | orchestrator |
2025-09-13 00:57:45.170645 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-13 00:57:45.170649 | orchestrator | Saturday 13 September 2025 00:53:23 +0000 (0:00:00.953) 0:06:54.674 ****
2025-09-13 00:57:45.170654 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.170659 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.170664 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.170668 | orchestrator |
2025-09-13 00:57:45.170673 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-13 00:57:45.170678 | orchestrator | Saturday 13 September 2025 00:53:24 +0000 (0:00:00.769) 0:06:55.443 ****
2025-09-13 00:57:45.170686 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.170691 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.170696 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.170700 | orchestrator |
2025-09-13 00:57:45.170705 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-13 00:57:45.170710 | orchestrator | Saturday 13 September 2025 00:53:25 +0000 (0:00:00.719) 0:06:56.163 ****
2025-09-13 00:57:45.170715 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.170720 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.170724 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.170729 | orchestrator |
2025-09-13 00:57:45.170734 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-13 00:57:45.170739 | orchestrator | Saturday 13 September 2025 00:53:25 +0000 (0:00:00.284) 0:06:56.448 ****
2025-09-13 00:57:45.170743 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.170748 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.170753 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.170758 | orchestrator |
2025-09-13 00:57:45.170763 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-13 00:57:45.170767 | orchestrator | Saturday 13 September 2025 00:53:25 +0000 (0:00:00.529) 0:06:56.977 ****
2025-09-13 00:57:45.170772 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.170777 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.170782 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.170786 | orchestrator |
2025-09-13 00:57:45.170791 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-13 00:57:45.170796 | orchestrator | Saturday 13 September 2025 00:53:26 +0000 (0:00:00.303) 0:06:57.281 ****
2025-09-13 00:57:45.170801 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.170805 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.170810 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.170815 | orchestrator |
2025-09-13 00:57:45.170820 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-13 00:57:45.170825 | orchestrator | Saturday 13 September 2025 00:53:26 +0000 (0:00:00.654) 0:06:57.935 ****
2025-09-13 00:57:45.170829 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.170834 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.170839 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.170844 | orchestrator |
2025-09-13 00:57:45.170849 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-13 00:57:45.170864 | orchestrator | Saturday 13 September 2025 00:53:27 +0000 (0:00:00.702) 0:06:58.637 ****
2025-09-13 00:57:45.170869 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.170874 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.170879 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.170884 | orchestrator |
2025-09-13 00:57:45.170889 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-13 00:57:45.170893 | orchestrator | Saturday 13 September 2025 00:53:28 +0000 (0:00:00.541) 0:06:59.179 ****
2025-09-13 00:57:45.170898 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.170903 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.170908 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.170912 | orchestrator |
2025-09-13 00:57:45.170917 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-13 00:57:45.170922 | orchestrator | Saturday 13 September 2025 00:53:28 +0000 (0:00:00.322) 0:06:59.501 ****
2025-09-13 00:57:45.170927 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.170932 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.170936 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.170941 | orchestrator |
2025-09-13 00:57:45.170946 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-13 00:57:45.170951 | orchestrator | Saturday 13 September 2025 00:53:28 +0000 (0:00:00.298) 0:06:59.800 ****
2025-09-13 00:57:45.170956 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.170964 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.170969 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.170973 | orchestrator |
2025-09-13 00:57:45.170978 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-13 00:57:45.170983 | orchestrator | Saturday 13 September 2025 00:53:29 +0000 (0:00:00.314) 0:07:00.115 ****
2025-09-13 00:57:45.170988 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.170992 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.170997 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.171002 | orchestrator |
2025-09-13 00:57:45.171009 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-13 00:57:45.171014 | orchestrator | Saturday 13 September 2025 00:53:29 +0000 (0:00:00.561) 0:07:00.676 ****
2025-09-13 00:57:45.171019 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.171024 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.171028 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.171033 | orchestrator |
2025-09-13 00:57:45.171038 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-13 00:57:45.171043 | orchestrator | Saturday 13 September 2025 00:53:29 +0000 (0:00:00.306) 0:07:00.983 ****
2025-09-13 00:57:45.171047 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.171052 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.171057 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.171062 | orchestrator |
2025-09-13 00:57:45.171066 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-13 00:57:45.171071 | orchestrator | Saturday 13 September 2025 00:53:30 +0000 (0:00:00.287) 0:07:01.271 ****
2025-09-13 00:57:45.171076 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.171081 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.171085 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.171090 | orchestrator |
2025-09-13 00:57:45.171098 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-13 00:57:45.171103 | orchestrator | Saturday 13 September 2025 00:53:30 +0000 (0:00:00.315) 0:07:01.586 ****
2025-09-13 00:57:45.171107 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.171112 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.171117 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.171122 | orchestrator |
2025-09-13 00:57:45.171127 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-13 00:57:45.171131 | orchestrator | Saturday 13 September 2025 00:53:31 +0000 (0:00:00.580) 0:07:02.166 ****
2025-09-13 00:57:45.171136 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.171141 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.171146 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.171151 | orchestrator |
2025-09-13 00:57:45.171155 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-09-13 00:57:45.171160 | orchestrator | Saturday 13 September 2025 00:53:31 +0000 (0:00:00.565) 0:07:02.732 ****
2025-09-13 00:57:45.171165 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.171170 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.171174 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.171179 | orchestrator |
2025-09-13 00:57:45.171184 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-09-13 00:57:45.171189 | orchestrator | Saturday 13 September 2025 00:53:32 +0000 (0:00:00.322) 0:07:03.054 ****
2025-09-13 00:57:45.171194 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-13 00:57:45.171199 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-13 00:57:45.171203 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-13 00:57:45.171208 | orchestrator |
2025-09-13 00:57:45.171213 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-09-13 00:57:45.171218 | orchestrator | Saturday 13 September 2025 00:53:32 +0000 (0:00:00.859) 0:07:03.913 ****
2025-09-13 00:57:45.171226 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:57:45.171231 | orchestrator |
2025-09-13 00:57:45.171236 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-09-13 00:57:45.171241 | orchestrator | Saturday 13 September 2025 00:53:33 +0000 (0:00:00.798) 0:07:04.712 ****
2025-09-13 00:57:45.171245 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.171250 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.171255 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.171260 | orchestrator |
2025-09-13 00:57:45.171265 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-09-13 00:57:45.171269 | orchestrator | Saturday 13 September 2025 00:53:34 +0000 (0:00:00.307) 0:07:05.019 ****
2025-09-13 00:57:45.171274 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.171279 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.171284 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.171288 | orchestrator |
2025-09-13 00:57:45.171293 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-09-13 00:57:45.171298 | orchestrator | Saturday 13 September 2025 00:53:34 +0000 (0:00:00.299) 0:07:05.319 ****
2025-09-13 00:57:45.171303 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.171307 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.171312 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.171317 | orchestrator |
2025-09-13 00:57:45.171322 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-09-13 00:57:45.171326 | orchestrator | Saturday 13 September 2025 00:53:35 +0000 (0:00:00.842) 0:07:06.162 ****
2025-09-13 00:57:45.171331 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.171336 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.171341 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.171345 | orchestrator |
2025-09-13 00:57:45.171350 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-09-13 00:57:45.171355 | orchestrator | Saturday 13 September 2025 00:53:35 +0000 (0:00:00.357) 0:07:06.520 ****
2025-09-13 00:57:45.171360 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-13 00:57:45.171365 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-13 00:57:45.171369 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-13 00:57:45.171374 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-13 00:57:45.171379 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-13 00:57:45.171384 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-13 00:57:45.171392 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-13 00:57:45.171397 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-13 00:57:45.171402 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-13 00:57:45.171407 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-13 00:57:45.171412 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-13 00:57:45.171417 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-13 00:57:45.171421 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-13 00:57:45.171426 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-13 00:57:45.171431 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-13 00:57:45.171436 | orchestrator |
2025-09-13 00:57:45.171443 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2025-09-13 00:57:45.171452 | orchestrator | Saturday 13 September 2025 00:53:38 +0000 (0:00:03.061) 0:07:09.582 ****
2025-09-13 00:57:45.171457 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.171461 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.171466 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.171471 | orchestrator |
2025-09-13 00:57:45.171476 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-09-13 00:57:45.171480 | orchestrator | Saturday 13 September 2025 00:53:38 +0000 (0:00:00.280) 0:07:09.862 ****
2025-09-13 00:57:45.171485 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:57:45.171490 | orchestrator |
2025-09-13 00:57:45.171495 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-09-13 00:57:45.171499 | orchestrator | Saturday 13 September 2025 00:53:39 +0000 (0:00:00.775) 0:07:10.637 ****
2025-09-13 00:57:45.171504 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-13 00:57:45.171509 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-13 00:57:45.171514 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-13 00:57:45.171518 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-09-13 00:57:45.171523 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-09-13 00:57:45.171528 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-09-13 00:57:45.171533 | orchestrator |
2025-09-13 00:57:45.171538 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-09-13 00:57:45.171542 | orchestrator | Saturday 13 September 2025 00:53:40 +0000 (0:00:00.919) 0:07:11.557 ****
2025-09-13 00:57:45.171547 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:57:45.171552 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-13 00:57:45.171557 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-13 00:57:45.171562 | orchestrator |
2025-09-13 00:57:45.171566 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-09-13 00:57:45.171571 | orchestrator | Saturday 13 September 2025 00:53:42 +0000 (0:00:02.008) 0:07:13.565 ****
2025-09-13 00:57:45.171576 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-13 00:57:45.171581 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-13 00:57:45.171586 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:57:45.171591 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-13 00:57:45.171595 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-13 00:57:45.171600 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:57:45.171605 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-13 00:57:45.171610 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-13 00:57:45.171614 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:57:45.171619 | orchestrator |
2025-09-13 00:57:45.171624 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-09-13 00:57:45.171629 | orchestrator | Saturday 13 September 2025 00:53:43 +0000 (0:00:01.199) 0:07:14.765 ****
2025-09-13 00:57:45.171633 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-13 00:57:45.171638 | orchestrator |
2025-09-13 00:57:45.171643 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-09-13 00:57:45.171648 | orchestrator | Saturday 13 September 2025 00:53:46 +0000 (0:00:02.678) 0:07:17.443 ****
2025-09-13 00:57:45.171653 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:57:45.171657 | orchestrator |
2025-09-13 00:57:45.171662 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-09-13 00:57:45.171667 | orchestrator | Saturday 13 September 2025 00:53:47 +0000 (0:00:00.570) 0:07:18.014 ****
2025-09-13 00:57:45.171672 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b9d4bd55-4398-5073-b181-64dcd216e500', 'data_vg': 'ceph-b9d4bd55-4398-5073-b181-64dcd216e500'})
2025-09-13 00:57:45.171681 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-741132e6-4e77-5ad5-aab1-a12c98657a1e', 'data_vg': 'ceph-741132e6-4e77-5ad5-aab1-a12c98657a1e'})
2025-09-13 00:57:45.171686 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4283f495-c022-53d0-a3fe-4c36d70cad8f', 'data_vg': 'ceph-4283f495-c022-53d0-a3fe-4c36d70cad8f'})
2025-09-13 00:57:45.171693 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c9c3f5f4-a401-5886-82fa-33c7ca08590f', 'data_vg': 'ceph-c9c3f5f4-a401-5886-82fa-33c7ca08590f'})
2025-09-13 00:57:45.171698 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b087737a-96b5-5170-ab1c-c312068a0bca', 'data_vg': 'ceph-b087737a-96b5-5170-ab1c-c312068a0bca'})
2025-09-13 00:57:45.171703 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a', 'data_vg': 'ceph-7339ba9f-b6a9-52d7-bde1-e21ae438ff7a'})
2025-09-13 00:57:45.171707 | orchestrator |
2025-09-13 00:57:45.171712 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-09-13 00:57:45.171717 | orchestrator | Saturday 13 September 2025 00:54:27 +0000 (0:00:40.646) 0:07:58.660 ****
2025-09-13 00:57:45.171722 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.171726 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.171731 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.171736 | orchestrator |
2025-09-13 00:57:45.171740 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-09-13 00:57:45.171748 | orchestrator | Saturday 13 September 2025 00:54:28 +0000 (0:00:00.581) 0:07:59.241 ****
2025-09-13 00:57:45.171752 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:57:45.171757 | orchestrator |
2025-09-13 00:57:45.171762 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-09-13 00:57:45.171767 | orchestrator | Saturday 13 September 2025 00:54:28 +0000 (0:00:00.551) 0:07:59.793 ****
2025-09-13 00:57:45.171772 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.171776 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.171781 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.171786 | orchestrator |
2025-09-13 00:57:45.171791 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-09-13 00:57:45.171795 | orchestrator | Saturday 13 September 2025 00:54:29 +0000 (0:00:00.701) 0:08:00.494 ****
2025-09-13 00:57:45.171800 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.171805 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.171810 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.171814 | orchestrator |
2025-09-13 00:57:45.171819 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-09-13 00:57:45.171824 | orchestrator | Saturday 13 September 2025 00:54:32 +0000 (0:00:02.801) 0:08:03.295 ****
2025-09-13 00:57:45.171829 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:57:45.171834 | orchestrator |
2025-09-13 00:57:45.171838 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-09-13 00:57:45.171843 | orchestrator | Saturday 13 September 2025 00:54:32 +0000 (0:00:00.509) 0:08:03.805 ****
2025-09-13 00:57:45.171848 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:57:45.171865 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:57:45.171870 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:57:45.171875 | orchestrator |
2025-09-13 00:57:45.171880 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-09-13 00:57:45.171884 | orchestrator | Saturday 13 September 2025 00:54:33 +0000 (0:00:01.113) 0:08:04.918 ****
2025-09-13 00:57:45.171889 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:57:45.171894 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:57:45.171899 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:57:45.171907 | orchestrator |
2025-09-13 00:57:45.171911 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-09-13 00:57:45.171916 | orchestrator | Saturday 13 September 2025 00:54:35 +0000 (0:00:01.403) 0:08:06.322 ****
2025-09-13 00:57:45.171921 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:57:45.171926 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:57:45.171930 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:57:45.171935 | orchestrator |
2025-09-13 00:57:45.171940 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-09-13 00:57:45.171945 | orchestrator | Saturday 13 September 2025 00:54:37 +0000 (0:00:01.743) 0:08:08.065 ****
2025-09-13 00:57:45.171949 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.171954 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.171959 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.171964 | orchestrator |
2025-09-13 00:57:45.171969 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-09-13 00:57:45.171974 | orchestrator | Saturday 13 September 2025 00:54:37 +0000 (0:00:00.340) 0:08:08.405 ****
2025-09-13 00:57:45.171978 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.171983 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.171988 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.171993 | orchestrator |
2025-09-13 00:57:45.171997 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-09-13 00:57:45.172002 | orchestrator | Saturday 13 September 2025 00:54:37 +0000 (0:00:00.356) 0:08:08.762 ****
2025-09-13 00:57:45.172007 | orchestrator | ok: [testbed-node-3] => (item=1)
2025-09-13 00:57:45.172012 | orchestrator | ok: [testbed-node-4] => (item=2)
2025-09-13 00:57:45.172016 | orchestrator | ok: [testbed-node-5] => (item=5)
2025-09-13 00:57:45.172021 | orchestrator | ok: [testbed-node-3] => (item=4)
2025-09-13 00:57:45.172026 | orchestrator | ok: [testbed-node-4] => (item=3)
2025-09-13 00:57:45.172030 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-13 00:57:45.172035 | orchestrator |
2025-09-13 00:57:45.172040 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-09-13 00:57:45.172045 | orchestrator | Saturday 13 September 2025 00:54:38 +0000 (0:00:01.238) 0:08:10.001 ****
2025-09-13 00:57:45.172049 | orchestrator | changed: [testbed-node-3] => (item=1)
2025-09-13 00:57:45.172054 | orchestrator | changed: [testbed-node-4] => (item=2)
2025-09-13 00:57:45.172059 | orchestrator | changed: [testbed-node-5] => (item=5)
2025-09-13 00:57:45.172064 | orchestrator | changed: [testbed-node-4] => (item=3)
2025-09-13 00:57:45.172068 | orchestrator | changed: [testbed-node-5] => (item=0)
2025-09-13 00:57:45.172073 | orchestrator | changed: [testbed-node-3] => (item=4)
2025-09-13 00:57:45.172078 | orchestrator |
2025-09-13 00:57:45.172085 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-09-13 00:57:45.172090 | orchestrator | Saturday 13 September 2025 00:54:41 +0000 (0:00:02.218) 0:08:12.219 ****
2025-09-13 00:57:45.172095 | orchestrator | changed: [testbed-node-4] => (item=2)
2025-09-13 00:57:45.172099 | orchestrator | changed: [testbed-node-3] => (item=1)
2025-09-13 00:57:45.172104 | orchestrator | changed: [testbed-node-5] => (item=5)
2025-09-13 00:57:45.172109 | orchestrator | changed: [testbed-node-4] => (item=3)
2025-09-13 00:57:45.172114 | orchestrator | changed: [testbed-node-5] => (item=0)
2025-09-13 00:57:45.172118 | orchestrator | changed: [testbed-node-3] => (item=4)
2025-09-13 00:57:45.172123 | orchestrator |
2025-09-13 00:57:45.172128 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-09-13 00:57:45.172133 | orchestrator | Saturday 13 September 2025 00:54:44 +0000 (0:00:03.425) 0:08:15.645 ****
2025-09-13 00:57:45.172138 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172142 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.172147 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-13 00:57:45.172152 | orchestrator |
2025-09-13 00:57:45.172157 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-09-13 00:57:45.172169 | orchestrator | Saturday 13 September 2025 00:54:46 +0000 (0:00:02.341) 0:08:17.986 ****
2025-09-13 00:57:45.172174 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172179 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.172184 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2025-09-13 00:57:45.172188 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-13 00:57:45.172193 | orchestrator |
2025-09-13 00:57:45.172198 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-09-13 00:57:45.172203 | orchestrator | Saturday 13 September 2025 00:55:00 +0000 (0:00:13.073) 0:08:31.060 ****
2025-09-13 00:57:45.172207 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172212 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.172217 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.172222 | orchestrator |
2025-09-13 00:57:45.172226 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-13 00:57:45.172231 | orchestrator | Saturday 13 September 2025 00:55:00 +0000 (0:00:00.842) 0:08:31.902 ****
2025-09-13 00:57:45.172236 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172241 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.172245 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.172250 | orchestrator |
2025-09-13 00:57:45.172255 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-09-13 00:57:45.172260 | orchestrator | Saturday 13 September 2025 00:55:01 +0000 (0:00:00.608) 0:08:32.511 ****
2025-09-13 00:57:45.172265 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:57:45.172269 | orchestrator |
2025-09-13 00:57:45.172274 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-09-13 00:57:45.172279 | orchestrator | Saturday 13 September 2025 00:55:02 +0000 (0:00:00.598) 0:08:33.109 ****
2025-09-13 00:57:45.172284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-13 00:57:45.172288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-13 00:57:45.172293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-13 00:57:45.172298 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172302 | orchestrator |
2025-09-13 00:57:45.172307 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-09-13 00:57:45.172312 | orchestrator | Saturday 13 September 2025 00:55:02 +0000 (0:00:00.382) 0:08:33.491 ****
2025-09-13 00:57:45.172317 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172321 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.172326 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.172331 | orchestrator |
2025-09-13 00:57:45.172335 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-09-13 00:57:45.172340 | orchestrator | Saturday 13 September 2025 00:55:02 +0000 (0:00:00.296) 0:08:33.787 ****
2025-09-13 00:57:45.172345 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172350 | orchestrator |
2025-09-13 00:57:45.172354 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-09-13 00:57:45.172359 | orchestrator | Saturday 13 September 2025 00:55:02 +0000 (0:00:00.220) 0:08:34.008 ****
2025-09-13 00:57:45.172364 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172369 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.172373 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.172378 | orchestrator |
2025-09-13 00:57:45.172383 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-09-13 00:57:45.172388 | orchestrator | Saturday 13 September 2025 00:55:03 +0000 (0:00:00.592) 0:08:34.600 ****
2025-09-13 00:57:45.172392 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172397 | orchestrator |
2025-09-13 00:57:45.172402 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-09-13 00:57:45.172412 | orchestrator | Saturday 13 September 2025 00:55:03 +0000 (0:00:00.234) 0:08:34.834 ****
2025-09-13 00:57:45.172417 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172422 | orchestrator |
2025-09-13 00:57:45.172427 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-09-13 00:57:45.172432 | orchestrator | Saturday 13 September 2025 00:55:04 +0000 (0:00:00.231) 0:08:35.065 ****
2025-09-13 00:57:45.172436 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172441 | orchestrator |
2025-09-13 00:57:45.172446 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-09-13 00:57:45.172451 | orchestrator | Saturday 13 September 2025 00:55:04 +0000 (0:00:00.116) 0:08:35.182 ****
2025-09-13 00:57:45.172455 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172460 | orchestrator |
2025-09-13 00:57:45.172465 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-09-13 00:57:45.172470 | orchestrator | Saturday 13 September 2025 00:55:04 +0000 (0:00:00.208) 0:08:35.391 ****
2025-09-13 00:57:45.172477 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172482 | orchestrator |
2025-09-13 00:57:45.172487 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-09-13 00:57:45.172491 | orchestrator | Saturday 13 September 2025 00:55:04 +0000 (0:00:00.211) 0:08:35.602 ****
2025-09-13 00:57:45.172496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-13 00:57:45.172501 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-13 00:57:45.172506 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-13 00:57:45.172510 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172515 | orchestrator |
2025-09-13 00:57:45.172520 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-09-13 00:57:45.172525 | orchestrator | Saturday 13 September 2025 00:55:04 +0000 (0:00:00.368) 0:08:35.971 ****
2025-09-13 00:57:45.172530 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172534 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.172539 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.172544 | orchestrator |
2025-09-13 00:57:45.172548 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-09-13 00:57:45.172556 | orchestrator | Saturday 13 September 2025 00:55:05 +0000 (0:00:00.290) 0:08:36.262 ****
2025-09-13 00:57:45.172561 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172566 | orchestrator |
2025-09-13 00:57:45.172570 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-09-13 00:57:45.172575 | orchestrator | Saturday 13 September 2025 00:55:06 +0000 (0:00:00.778) 0:08:37.040 ****
2025-09-13 00:57:45.172580 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172585 | orchestrator |
2025-09-13 00:57:45.172589 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-09-13 00:57:45.172594 | orchestrator |
2025-09-13 00:57:45.172599 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-13 00:57:45.172604 | orchestrator | Saturday 13 September 2025 00:55:06 +0000 (0:00:00.656) 0:08:37.696 ****
2025-09-13 00:57:45.172608 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:57:45.172614 | orchestrator |
2025-09-13 00:57:45.172618 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-13 00:57:45.172623 | orchestrator | Saturday 13 September 2025 00:55:07 +0000 (0:00:01.200) 0:08:38.897 ****
2025-09-13 00:57:45.172628 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:57:45.172633 | orchestrator |
2025-09-13 00:57:45.172638 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-13 00:57:45.172643 | orchestrator | Saturday 13 September 2025 00:55:09 +0000 (0:00:01.208) 0:08:40.105 ****
2025-09-13 00:57:45.172651 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172656 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.172661 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.172666 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.172671 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.172676 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.172680 | orchestrator |
2025-09-13 00:57:45.172685 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-13 00:57:45.172690 | orchestrator | Saturday 13 September 2025 00:55:10 +0000 (0:00:01.208) 0:08:41.313 ****
2025-09-13 00:57:45.172695 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.172699 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.172704 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.172709 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.172714 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.172718 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.172723 | orchestrator |
2025-09-13 00:57:45.172728 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-13 00:57:45.172733 | orchestrator | Saturday 13 September 2025 00:55:11 +0000 (0:00:00.711) 0:08:42.025 ****
2025-09-13 00:57:45.172738 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.172742 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.172747 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.172752 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.172756 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.172761 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.172766 | orchestrator |
2025-09-13 00:57:45.172771 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-13 00:57:45.172775 | orchestrator | Saturday 13 September 2025 00:55:11 +0000 (0:00:00.708) 0:08:42.734 ****
2025-09-13 00:57:45.172780 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.172785 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.172790 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.172795 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.172799 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.172804 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.172809 | orchestrator |
2025-09-13 00:57:45.172814 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-13 00:57:45.172818 | orchestrator | Saturday 13 September 2025 00:55:12 +0000 (0:00:00.950) 0:08:43.684 ****
2025-09-13 00:57:45.172823 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172828 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.172833 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.172838 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.172842 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.172847 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.172852 | orchestrator |
2025-09-13 00:57:45.172882 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-13 00:57:45.172887 | orchestrator | Saturday 13 September 2025 00:55:13 +0000 (0:00:00.852) 0:08:44.537 ****
2025-09-13 00:57:45.172892 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172897 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.172901 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.172906 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.172911 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.172918 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.172923 | orchestrator |
2025-09-13 00:57:45.172928 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-13 00:57:45.172932 | orchestrator | Saturday 13 September 2025 00:55:14 +0000 (0:00:00.680) 0:08:45.218 ****
2025-09-13 00:57:45.172937 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.172942 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.172946 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.172951 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.172960 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.172965 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.172969 | orchestrator |
2025-09-13 00:57:45.172974 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-13 00:57:45.172979 | orchestrator | Saturday 13 September 2025 00:55:14 +0000 (0:00:00.486) 0:08:45.705 ****
2025-09-13 00:57:45.172984 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.172988 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.172993 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.172998 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.173002 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.173007 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.173012 | orchestrator |
2025-09-13 00:57:45.173019 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-13 00:57:45.173024 | orchestrator | Saturday 13 September 2025 00:55:15 +0000 (0:00:01.146) 0:08:46.852 ****
2025-09-13 00:57:45.173029 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.173034 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.173038 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.173043 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.173047 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.173052 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.173057 | orchestrator |
2025-09-13 00:57:45.173061 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-13 00:57:45.173066 | orchestrator | Saturday 13 September 2025 00:55:16 +0000 (0:00:01.058) 0:08:47.910 ****
2025-09-13 00:57:45.173071 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.173076 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.173080 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.173085 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.173090 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.173094 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.173099 | orchestrator |
2025-09-13 00:57:45.173103 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-13 00:57:45.173108 | orchestrator | Saturday 13 September 2025 00:55:17 +0000 (0:00:00.916) 0:08:48.827 ****
2025-09-13 00:57:45.173113 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.173118 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.173122 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.173127 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:57:45.173131 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:57:45.173136 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:57:45.173141 | orchestrator |
2025-09-13 00:57:45.173146 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-13 00:57:45.173150 | orchestrator | Saturday 13 September 2025 00:55:18 +0000 (0:00:00.570) 0:08:49.397 ****
2025-09-13 00:57:45.173155 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.173160 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.173164 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.173169 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.173173 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.173178 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.173183 | orchestrator |
2025-09-13 00:57:45.173187 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-13 00:57:45.173192 | orchestrator | Saturday 13 September 2025 00:55:19 +0000 (0:00:00.832) 0:08:50.230 ****
2025-09-13 00:57:45.173197 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.173202 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.173206 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.173211 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.173215 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:57:45.173220 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:57:45.173225 | orchestrator |
2025-09-13 00:57:45.173230 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-13 00:57:45.173234 | orchestrator | Saturday 13 September 2025 00:55:19 +0000 (0:00:00.608) 0:08:50.838 ****
2025-09-13 00:57:45.173243 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.173248 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.173252 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.173257 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:57:45.173262 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.173266 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.173271 | orchestrator | 2025-09-13 00:57:45.173276 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-13 00:57:45.173281 | orchestrator | Saturday 13 September 2025 00:55:20 +0000 (0:00:00.896) 0:08:51.735 **** 2025-09-13 00:57:45.173285 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.173290 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.173294 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.173299 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.173304 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.173308 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.173313 | orchestrator | 2025-09-13 00:57:45.173318 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-13 00:57:45.173323 | orchestrator | Saturday 13 September 2025 00:55:21 +0000 (0:00:00.481) 0:08:52.216 **** 2025-09-13 00:57:45.173327 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.173332 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.173336 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.173341 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:57:45.173346 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:57:45.173350 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:57:45.173355 | orchestrator | 2025-09-13 00:57:45.173360 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-13 00:57:45.173364 | orchestrator | Saturday 13 September 2025 00:55:21 +0000 (0:00:00.647) 0:08:52.863 **** 2025-09-13 00:57:45.173369 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.173374 | orchestrator | skipping: [testbed-node-4] 
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Saturday 13 September 2025 00:55:22 +0000 (0:00:00.448) 0:08:53.312 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Saturday 13 September 2025 00:55:22 +0000 (0:00:00.654) 0:08:53.967 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-crash : Create client.crash keyring] ********************************
Saturday 13 September 2025 00:55:24 +0000 (0:00:01.103) 0:08:55.070 ****
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Get keys from monitors] *************************************
Saturday 13 September 2025 00:55:28 +0000 (0:00:04.311) 0:08:59.382 ****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
Saturday 13 September 2025 00:55:30 +0000 (0:00:02.073) 0:09:01.455 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
ok: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
Saturday 13 September 2025 00:55:31 +0000 (0:00:01.475) 0:09:02.931 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Include_tasks systemd.yml] **********************************
Saturday 13 September 2025 00:55:33 +0000 (0:00:01.200) 0:09:04.131 ****
included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
Saturday 13 September 2025 00:55:34 +0000 (0:00:01.257) 0:09:05.389 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-crash : Start the ceph-crash service] *******************************
Saturday 13 September 2025 00:55:35 +0000 (0:00:01.465) 0:09:06.855 ****
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
Saturday 13 September 2025 00:55:39 +0000 (0:00:03.644) 0:09:10.499 ****
included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
Saturday 13 September 2025 00:55:40 +0000 (0:00:01.217) 0:09:11.717 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
Saturday 13 September 2025 00:55:41 +0000 (0:00:00.599) 0:09:12.316 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
Saturday 13 September 2025 00:55:43 +0000 (0:00:02.373) 0:09:14.690 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mds] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Saturday 13 September 2025 00:55:44 +0000 (0:00:00.804) 0:09:15.495 ****
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Saturday 13 September 2025 00:55:45 +0000 (0:00:00.807) 0:09:16.303 ****
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Saturday 13 September 2025 00:55:45 +0000 (0:00:00.532) 0:09:16.835 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Saturday 13 September 2025 00:55:46 +0000 (0:00:00.556) 0:09:17.391 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Saturday 13 September 2025 00:55:47 +0000 (0:00:00.721) 0:09:18.113 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Saturday 13 September 2025 00:55:47 +0000 (0:00:00.688) 0:09:18.802 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Saturday 13 September 2025 00:55:48 +0000 (0:00:00.756) 0:09:19.559 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Saturday 13 September 2025 00:55:49 +0000 (0:00:00.551) 0:09:20.110 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Saturday 13 September 2025 00:55:49 +0000 (0:00:00.306) 0:09:20.417 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Saturday 13 September 2025 00:55:49 +0000 (0:00:00.305) 0:09:20.723 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Saturday 13 September 2025 00:55:50 +0000 (0:00:00.736) 0:09:21.459 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Saturday 13 September 2025 00:55:51 +0000 (0:00:01.035) 0:09:22.494 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Saturday 13 September 2025 00:55:51 +0000 (0:00:00.340) 0:09:22.835 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Saturday 13 September 2025 00:55:52 +0000 (0:00:00.367) 0:09:23.202 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Saturday 13 September 2025 00:55:52 +0000 (0:00:00.392) 0:09:23.595 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Saturday 13 September 2025 00:55:53 +0000 (0:00:00.719) 0:09:24.315 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Saturday 13 September 2025 00:55:53 +0000 (0:00:00.431) 0:09:24.746 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Saturday 13 September 2025 00:55:54 +0000 (0:00:00.346) 0:09:25.092 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Saturday 13 September 2025 00:55:54 +0000 (0:00:00.332) 0:09:25.425 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Saturday 13 September 2025 00:55:54 +0000 (0:00:00.557) 0:09:25.982 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Saturday 13 September 2025 00:55:55 +0000 (0:00:00.337) 0:09:26.320 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
Saturday 13 September 2025 00:55:55 +0000 (0:00:00.551) 0:09:26.871 ****
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3

TASK [ceph-facts : Get current default crush rule details] *********************
Saturday 13 September 2025 00:55:56 +0000 (0:00:00.679) 0:09:27.550 ****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Get current default crush rule name] ************************
Saturday 13 September 2025 00:55:58 +0000 (0:00:02.452) 0:09:30.003 ****
skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
skipping: [testbed-node-3]

TASK [ceph-mds : Create filesystem pools] **************************************
Saturday 13 September 2025 00:55:59 +0000 (0:00:00.217) 0:09:30.221 ****
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})

TASK [ceph-mds : Create ceph filesystem] ***************************************
Saturday 13 September 2025 00:56:07 +0000 (0:00:07.895) 0:09:38.116 ****
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mds : Include common.yml] *******************************************
Saturday 13 September 2025 00:56:10 +0000 (0:00:03.745) 0:09:41.862 ****
included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
Saturday 13 September 2025 00:56:11 +0000 (0:00:00.820) 0:09:42.683 ****
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)

TASK [ceph-mds : Get keys from monitors] ***************************************
Saturday 13 September 2025 00:56:12 +0000 (0:00:01.058) 0:09:43.741 ****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
Saturday 13 September 2025 00:56:14 +0000 (0:00:02.124) 0:09:45.866 ****
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-mds : Create mds keyring] *******************************************
Saturday 13 September 2025 00:56:16 +0000 (0:00:01.285) 0:09:47.152 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Non_containerized.yml] ****************************************
Saturday 13 September 2025 00:56:18 +0000 (0:00:02.830) 0:09:49.983 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-mds : Containerized.yml] ********************************************
Saturday 13 September 2025 00:56:19 +0000 (0:00:00.683) 0:09:50.666 ****
included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Include_tasks systemd.yml] ************************************
Saturday 13 September 2025 00:56:20 +0000 (0:00:00.616) 0:09:51.282 ****
included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Generate systemd unit file] ***********************************
Saturday 13 September 2025 00:56:21 +0000 (0:00:00.810) 0:09:52.093 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
Saturday 13 September 2025 00:56:22 +0000 (0:00:01.382) 0:09:53.475 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Enable ceph-mds.target] ***************************************
Saturday 13 September 2025 00:56:23 +0000 (0:00:01.124) 0:09:54.599 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Systemd start mds container] **********************************
Saturday 13 September 2025 00:56:25 +0000 (0:00:01.801) 0:09:56.401 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Saturday 13 September 2025 00:56:27 +0000 (0:00:02.213) 0:09:58.614 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 13 September 2025 00:56:28 +0000 (0:00:01.171) 0:09:59.786 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Saturday 13 September 2025 00:56:29 +0000 (0:00:00.946) 0:10:00.732 ****
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Saturday 13 September 2025 00:56:30 +0000 (0:00:00.471) 0:10:01.204 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Saturday 13 September 2025 00:56:30 +0000 (0:00:00.294) 0:10:01.499 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Saturday 13 September 2025 00:56:31 +0000 (0:00:01.353) 0:10:02.853 ****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Saturday 13 September 2025 00:56:32 +0000 (0:00:00.582) 0:10:03.436 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Saturday 13 September 2025 00:56:32 +0000 (0:00:00.507) 0:10:03.943 ****
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Saturday 13 September 2025 00:56:33 +0000 (0:00:00.617) 0:10:04.561 ****
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Saturday 13 September 2025 00:56:33 +0000 (0:00:00.444) 0:10:05.005 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Saturday 13 September 2025 00:56:34 +0000 (0:00:00.383) 0:10:05.389 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
2025-09-13
00:57:45.175220 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.175224 | orchestrator | 2025-09-13 00:57:45.175229 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-13 00:57:45.175234 | orchestrator | Saturday 13 September 2025 00:56:35 +0000 (0:00:00.640) 0:10:06.030 **** 2025-09-13 00:57:45.175238 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.175243 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.175247 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.175252 | orchestrator | 2025-09-13 00:57:45.175256 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-13 00:57:45.175261 | orchestrator | Saturday 13 September 2025 00:56:35 +0000 (0:00:00.663) 0:10:06.693 **** 2025-09-13 00:57:45.175265 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.175270 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.175274 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.175279 | orchestrator | 2025-09-13 00:57:45.175283 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-13 00:57:45.175288 | orchestrator | Saturday 13 September 2025 00:56:36 +0000 (0:00:00.737) 0:10:07.430 **** 2025-09-13 00:57:45.175292 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.175297 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.175301 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.175306 | orchestrator | 2025-09-13 00:57:45.175310 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-13 00:57:45.175317 | orchestrator | Saturday 13 September 2025 00:56:36 +0000 (0:00:00.560) 0:10:07.991 **** 2025-09-13 00:57:45.175322 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.175326 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.175331 | orchestrator | skipping: 
[testbed-node-5] 2025-09-13 00:57:45.175335 | orchestrator | 2025-09-13 00:57:45.175340 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-13 00:57:45.175344 | orchestrator | Saturday 13 September 2025 00:56:37 +0000 (0:00:00.332) 0:10:08.324 **** 2025-09-13 00:57:45.175349 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.175353 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.175358 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.175362 | orchestrator | 2025-09-13 00:57:45.175367 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-13 00:57:45.175371 | orchestrator | Saturday 13 September 2025 00:56:37 +0000 (0:00:00.290) 0:10:08.615 **** 2025-09-13 00:57:45.175376 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.175380 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.175385 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.175389 | orchestrator | 2025-09-13 00:57:45.175394 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-13 00:57:45.175401 | orchestrator | Saturday 13 September 2025 00:56:38 +0000 (0:00:00.714) 0:10:09.329 **** 2025-09-13 00:57:45.175406 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.175410 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.175415 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.175419 | orchestrator | 2025-09-13 00:57:45.175424 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-13 00:57:45.175428 | orchestrator | Saturday 13 September 2025 00:56:39 +0000 (0:00:00.986) 0:10:10.316 **** 2025-09-13 00:57:45.175433 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.175437 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.175442 | orchestrator | skipping: [testbed-node-5] 2025-09-13 
00:57:45.175446 | orchestrator | 2025-09-13 00:57:45.175451 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-13 00:57:45.175456 | orchestrator | Saturday 13 September 2025 00:56:39 +0000 (0:00:00.292) 0:10:10.609 **** 2025-09-13 00:57:45.175460 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:57:45.175465 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:57:45.175472 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:57:45.175477 | orchestrator | 2025-09-13 00:57:45.175481 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-13 00:57:45.175486 | orchestrator | Saturday 13 September 2025 00:56:39 +0000 (0:00:00.320) 0:10:10.930 **** 2025-09-13 00:57:45.175490 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.175495 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.175499 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.175504 | orchestrator | 2025-09-13 00:57:45.175509 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-13 00:57:45.175513 | orchestrator | Saturday 13 September 2025 00:56:40 +0000 (0:00:00.356) 0:10:11.286 **** 2025-09-13 00:57:45.175518 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.175522 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.175527 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.175531 | orchestrator | 2025-09-13 00:57:45.175536 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-13 00:57:45.175540 | orchestrator | Saturday 13 September 2025 00:56:40 +0000 (0:00:00.608) 0:10:11.895 **** 2025-09-13 00:57:45.175545 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.175549 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.175554 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.175558 | orchestrator | 2025-09-13 
00:57:45.175563 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-13 00:57:45.175567 | orchestrator | Saturday 13 September 2025 00:56:41 +0000 (0:00:00.329) 0:10:12.224 ****
2025-09-13 00:57:45.175572 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.175576 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.175581 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.175585 | orchestrator |
2025-09-13 00:57:45.175590 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-13 00:57:45.175594 | orchestrator | Saturday 13 September 2025 00:56:41 +0000 (0:00:00.313) 0:10:12.538 ****
2025-09-13 00:57:45.175599 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.175603 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.175608 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.175612 | orchestrator |
2025-09-13 00:57:45.175617 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-13 00:57:45.175621 | orchestrator | Saturday 13 September 2025 00:56:41 +0000 (0:00:00.297) 0:10:12.835 ****
2025-09-13 00:57:45.175626 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.175630 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.175635 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.175639 | orchestrator |
2025-09-13 00:57:45.175644 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-13 00:57:45.175648 | orchestrator | Saturday 13 September 2025 00:56:42 +0000 (0:00:00.549) 0:10:13.384 ****
2025-09-13 00:57:45.175653 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.175657 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.175662 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.175666 | orchestrator |
2025-09-13 00:57:45.175671 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-13 00:57:45.175675 | orchestrator | Saturday 13 September 2025 00:56:42 +0000 (0:00:00.319) 0:10:13.704 ****
2025-09-13 00:57:45.175680 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.175684 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.175689 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.175693 | orchestrator |
2025-09-13 00:57:45.175698 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2025-09-13 00:57:45.175702 | orchestrator | Saturday 13 September 2025 00:56:43 +0000 (0:00:00.534) 0:10:14.238 ****
2025-09-13 00:57:45.175707 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:57:45.175712 | orchestrator |
2025-09-13 00:57:45.175716 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-09-13 00:57:45.175724 | orchestrator | Saturday 13 September 2025 00:56:43 +0000 (0:00:00.747) 0:10:14.985 ****
2025-09-13 00:57:45.175730 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:57:45.175735 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-13 00:57:45.175739 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-13 00:57:45.175744 | orchestrator |
2025-09-13 00:57:45.175749 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-09-13 00:57:45.175753 | orchestrator | Saturday 13 September 2025 00:56:46 +0000 (0:00:02.297) 0:10:17.283 ****
2025-09-13 00:57:45.175758 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-13 00:57:45.175762 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-13 00:57:45.175767 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:57:45.175771 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-13 00:57:45.175776 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-13 00:57:45.175780 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:57:45.175785 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-13 00:57:45.175789 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-13 00:57:45.175794 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:57:45.175798 | orchestrator |
2025-09-13 00:57:45.175805 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2025-09-13 00:57:45.175810 | orchestrator | Saturday 13 September 2025 00:56:47 +0000 (0:00:01.185) 0:10:18.468 ****
2025-09-13 00:57:45.175814 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.175819 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.175823 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.175828 | orchestrator |
2025-09-13 00:57:45.175832 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2025-09-13 00:57:45.175837 | orchestrator | Saturday 13 September 2025 00:56:47 +0000 (0:00:00.308) 0:10:18.777 ****
2025-09-13 00:57:45.175841 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:57:45.175846 | orchestrator |
2025-09-13 00:57:45.175850 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2025-09-13 00:57:45.175864 | orchestrator | Saturday 13 September 2025 00:56:48 +0000 (0:00:00.826) 0:10:19.604 ****
2025-09-13 00:57:45.175868 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-13 00:57:45.175873 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-13 00:57:45.175878 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-13 00:57:45.175882 | orchestrator |
2025-09-13 00:57:45.175887 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2025-09-13 00:57:45.175891 | orchestrator | Saturday 13 September 2025 00:56:49 +0000 (0:00:00.860) 0:10:20.464 ****
2025-09-13 00:57:45.175896 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:57:45.175900 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-09-13 00:57:45.175905 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:57:45.175909 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-09-13 00:57:45.175914 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:57:45.175918 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-09-13 00:57:45.175926 | orchestrator |
2025-09-13 00:57:45.175931 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-09-13 00:57:45.175935 | orchestrator | Saturday 13 September 2025 00:56:53 +0000 (0:00:04.371) 0:10:24.835 ****
2025-09-13 00:57:45.175940 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:57:45.175944 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:57:45.175949 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-13 00:57:45.175953 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-13 00:57:45.175958 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:57:45.175962 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-13 00:57:45.175967 | orchestrator |
2025-09-13 00:57:45.175971 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-09-13 00:57:45.175976 | orchestrator | Saturday 13 September 2025 00:56:56 +0000 (0:00:02.889) 0:10:27.725 ****
2025-09-13 00:57:45.175980 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-13 00:57:45.175985 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:57:45.175989 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-13 00:57:45.175994 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:57:45.175998 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-13 00:57:45.176003 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:57:45.176007 | orchestrator |
2025-09-13 00:57:45.176012 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2025-09-13 00:57:45.176016 | orchestrator | Saturday 13 September 2025 00:56:57 +0000 (0:00:01.189) 0:10:28.914 ****
2025-09-13 00:57:45.176023 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2025-09-13 00:57:45.176028 | orchestrator |
2025-09-13 00:57:45.176033 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2025-09-13 00:57:45.176037 | orchestrator | Saturday 13 September 2025 00:56:58 +0000 (0:00:00.224) 0:10:29.139 ****
2025-09-13 00:57:45.176041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-13 00:57:45.176046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-13 00:57:45.176051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-13 00:57:45.176055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-13 00:57:45.176062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-13 00:57:45.176067 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.176072 | orchestrator |
2025-09-13 00:57:45.176076 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2025-09-13 00:57:45.176081 | orchestrator | Saturday 13 September 2025 00:56:58 +0000 (0:00:00.560) 0:10:29.699 ****
2025-09-13 00:57:45.176085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-13 00:57:45.176090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-13 00:57:45.176094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-13 00:57:45.176099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-13 00:57:45.176106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-13 00:57:45.176111 | orchestrator | skipping: [testbed-node-3]
2025-09-13
00:57:45.176116 | orchestrator |
2025-09-13 00:57:45.176120 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2025-09-13 00:57:45.176125 | orchestrator | Saturday 13 September 2025 00:56:59 +0000 (0:00:00.562) 0:10:30.261 ****
2025-09-13 00:57:45.176129 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-13 00:57:45.176134 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-13 00:57:45.176138 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-13 00:57:45.176143 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-13 00:57:45.176147 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-13 00:57:45.176152 | orchestrator |
2025-09-13 00:57:45.176156 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2025-09-13 00:57:45.176161 | orchestrator | Saturday 13 September 2025 00:57:30 +0000 (0:00:31.035) 0:11:01.297 ****
2025-09-13 00:57:45.176165 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.176170 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.176174 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.176179 | orchestrator |
2025-09-13 00:57:45.176183 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2025-09-13 00:57:45.176188 | orchestrator | Saturday 13 September 2025 00:57:30 +0000 (0:00:00.294) 0:11:01.592 ****
2025-09-13 00:57:45.176192 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.176197 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.176201 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.176206 | orchestrator |
2025-09-13 00:57:45.176210 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2025-09-13 00:57:45.176215 | orchestrator | Saturday 13 September 2025 00:57:31 +0000 (0:00:00.571) 0:11:02.163 ****
2025-09-13 00:57:45.176219 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:57:45.176224 | orchestrator |
2025-09-13 00:57:45.176228 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2025-09-13 00:57:45.176233 | orchestrator | Saturday 13 September 2025 00:57:31 +0000 (0:00:00.567) 0:11:02.731 ****
2025-09-13 00:57:45.176237 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:57:45.176242 | orchestrator |
2025-09-13 00:57:45.176246 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2025-09-13 00:57:45.176251 | orchestrator | Saturday 13 September 2025 00:57:32 +0000 (0:00:00.764) 0:11:03.496 ****
2025-09-13 00:57:45.176257 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:57:45.176262 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:57:45.176266 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:57:45.176271 | orchestrator |
2025-09-13 00:57:45.176275 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2025-09-13 00:57:45.176280 | orchestrator | Saturday 13 September 2025 00:57:33 +0000 (0:00:01.348) 0:11:04.844 ****
2025-09-13 00:57:45.176284 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:57:45.176289 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:57:45.176293 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:57:45.176298 | orchestrator |
2025-09-13 00:57:45.176306 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2025-09-13 00:57:45.176310 | orchestrator | Saturday 13 September 2025 00:57:34 +0000 (0:00:01.157) 0:11:06.002 ****
2025-09-13 00:57:45.176315 | orchestrator | changed: [testbed-node-3]
2025-09-13 00:57:45.176319 | orchestrator | changed: [testbed-node-4]
2025-09-13 00:57:45.176324 | orchestrator | changed: [testbed-node-5]
2025-09-13 00:57:45.176328 | orchestrator |
2025-09-13 00:57:45.176333 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2025-09-13 00:57:45.176337 | orchestrator | Saturday 13 September 2025 00:57:36 +0000 (0:00:01.773) 0:11:07.775 ****
2025-09-13 00:57:45.176344 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-13 00:57:45.176349 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-13 00:57:45.176353 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-13 00:57:45.176358 | orchestrator |
2025-09-13 00:57:45.176363 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-13 00:57:45.176367 | orchestrator | Saturday 13 September 2025 00:57:39 +0000 (0:00:02.776) 0:11:10.551 ****
2025-09-13 00:57:45.176371 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.176376 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.176380 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.176385 | orchestrator |
2025-09-13 00:57:45.176389 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-09-13 00:57:45.176394 | orchestrator | Saturday 13 September 2025 00:57:39 +0000 (0:00:00.309) 0:11:10.860 ****
2025-09-13 00:57:45.176399 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:57:45.176403 | orchestrator |
2025-09-13 00:57:45.176408 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-09-13 00:57:45.176412 | orchestrator | Saturday 13 September 2025 00:57:40 +0000 (0:00:00.774) 0:11:11.635 ****
2025-09-13 00:57:45.176417 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:57:45.176421 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:57:45.176426 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:57:45.176430 | orchestrator |
2025-09-13 00:57:45.176435 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-09-13 00:57:45.176439 | orchestrator | Saturday 13 September 2025 00:57:40 +0000 (0:00:00.326) 0:11:11.962 ****
2025-09-13 00:57:45.176444 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:57:45.176448 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:57:45.176453 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:57:45.176457 | orchestrator |
2025-09-13 00:57:45.176462 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-09-13 00:57:45.176466 | orchestrator | Saturday 13 September 2025 00:57:41 +0000 (0:00:00.323) 0:11:12.285 ****
2025-09-13 00:57:45.176471 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-13 00:57:45.176475 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-13 00:57:45.176480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-13 00:57:45.176484 | orchestrator
| skipping: [testbed-node-3] 2025-09-13 00:57:45.176489 | orchestrator | 2025-09-13 00:57:45.176493 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-13 00:57:45.176497 | orchestrator | Saturday 13 September 2025 00:57:42 +0000 (0:00:01.162) 0:11:13.448 **** 2025-09-13 00:57:45.176502 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:57:45.176507 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:57:45.176511 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:57:45.176516 | orchestrator | 2025-09-13 00:57:45.176520 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 00:57:45.176530 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-09-13 00:57:45.176534 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-09-13 00:57:45.176539 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-13 00:57:45.176543 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-09-13 00:57:45.176548 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-13 00:57:45.176553 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-13 00:57:45.176557 | orchestrator | 2025-09-13 00:57:45.176562 | orchestrator | 2025-09-13 00:57:45.176566 | orchestrator | 2025-09-13 00:57:45.176573 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 00:57:45.176578 | orchestrator | Saturday 13 September 2025 00:57:42 +0000 (0:00:00.256) 0:11:13.705 **** 2025-09-13 00:57:45.176582 | orchestrator | =============================================================================== 
2025-09-13 00:57:45.176587 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 60.02s 2025-09-13 00:57:45.176591 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.65s 2025-09-13 00:57:45.176596 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.04s 2025-09-13 00:57:45.176600 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.47s 2025-09-13 00:57:45.176605 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.93s 2025-09-13 00:57:45.176609 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 13.35s 2025-09-13 00:57:45.176614 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.07s 2025-09-13 00:57:45.176618 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.07s 2025-09-13 00:57:45.176625 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.03s 2025-09-13 00:57:45.176630 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.90s 2025-09-13 00:57:45.176634 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.94s 2025-09-13 00:57:45.176639 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.48s 2025-09-13 00:57:45.176643 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 5.61s 2025-09-13 00:57:45.176648 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.15s 2025-09-13 00:57:45.176652 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.37s 2025-09-13 00:57:45.176656 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.31s 2025-09-13 
00:57:45.176661 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.99s
2025-09-13 00:57:45.176665 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.75s
2025-09-13 00:57:45.176670 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.64s
2025-09-13 00:57:45.176674 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.53s
2025-09-13 00:57:48.206124 | orchestrator | 2025-09-13 00:57:48 | INFO  | Task e5b65605-2759-4d22-b24c-1a6e872a3456 is in state STARTED
2025-09-13 00:57:48.208226 | orchestrator | 2025-09-13 00:57:48 | INFO  | Task 8acf60d3-1bd8-4dbb-a4b9-f2d35a745e30 is in state STARTED
2025-09-13 00:57:48.209374 | orchestrator | 2025-09-13 00:57:48 | INFO  | Task 7d6ab84f-924f-4881-acb1-b90789aa2e9e is in state STARTED
2025-09-13 00:57:48.210686 | orchestrator | 2025-09-13 00:57:48 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:57:51.255544 | orchestrator | 2025-09-13 00:57:51 | INFO  | Task e5b65605-2759-4d22-b24c-1a6e872a3456 is in state STARTED
2025-09-13 00:57:51.257174 | orchestrator | 2025-09-13 00:57:51 | INFO  | Task 8acf60d3-1bd8-4dbb-a4b9-f2d35a745e30 is in state STARTED
2025-09-13 00:57:51.260949 | orchestrator | 2025-09-13 00:57:51 | INFO  | Task 7d6ab84f-924f-4881-acb1-b90789aa2e9e is in state STARTED
2025-09-13 00:57:51.261239 | orchestrator | 2025-09-13 00:57:51 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:57:54.320504 | orchestrator | 2025-09-13 00:57:54 | INFO  | Task e5b65605-2759-4d22-b24c-1a6e872a3456 is in state STARTED
2025-09-13 00:57:54.322874 | orchestrator | 2025-09-13 00:57:54 | INFO  | Task 8acf60d3-1bd8-4dbb-a4b9-f2d35a745e30 is in state STARTED
2025-09-13 00:57:54.325289 | orchestrator | 2025-09-13 00:57:54 | INFO  | Task 7d6ab84f-924f-4881-acb1-b90789aa2e9e is in state STARTED
2025-09-13 00:57:54.325310 | orchestrator |
2025-09-13 00:57:54 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:58:30.936996 | orchestrator | 2025-09-13 00:58:30 | INFO  | Task e5b65605-2759-4d22-b24c-1a6e872a3456 is in state STARTED
2025-09-13 00:58:30.938249 | orchestrator |
2025-09-13 00:58:30 | INFO  | Task 8acf60d3-1bd8-4dbb-a4b9-f2d35a745e30 is in state STARTED
2025-09-13 00:58:30.939945 | orchestrator | 2025-09-13 00:58:30 | INFO  | Task 7d6ab84f-924f-4881-acb1-b90789aa2e9e is in state STARTED
2025-09-13 00:58:30.939969 | orchestrator | 2025-09-13 00:58:30 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:58:40.097355 | orchestrator | 2025-09-13 00:58:40 | INFO  | Task e5b65605-2759-4d22-b24c-1a6e872a3456 is in state STARTED
2025-09-13 00:58:40.100138 | orchestrator | 2025-09-13 00:58:40 | INFO  | Task 8acf60d3-1bd8-4dbb-a4b9-f2d35a745e30 is in state SUCCESS
2025-09-13 00:58:40.101872 | orchestrator |
2025-09-13 00:58:40.101908 | orchestrator |
2025-09-13 00:58:40.101920 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 00:58:40.101932 | orchestrator |
2025-09-13 00:58:40.101944 | orchestrator | TASK [Group hosts based on Kolla action]
***************************************
2025-09-13 00:58:40.102136 | orchestrator | Saturday 13 September 2025 00:55:45 +0000 (0:00:00.263) 0:00:00.263 ****
2025-09-13 00:58:40.102156 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:58:40.102169 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:58:40.102181 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:58:40.102192 | orchestrator |
2025-09-13 00:58:40.102203 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 00:58:40.102214 | orchestrator | Saturday 13 September 2025 00:55:45 +0000 (0:00:00.274) 0:00:00.537 ****
2025-09-13 00:58:40.102226 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-09-13 00:58:40.102237 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-09-13 00:58:40.102248 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-09-13 00:58:40.102259 | orchestrator |
2025-09-13 00:58:40.102270 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-09-13 00:58:40.102281 | orchestrator |
2025-09-13 00:58:40.102292 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-13 00:58:40.102303 | orchestrator | Saturday 13 September 2025 00:55:46 +0000 (0:00:00.466) 0:00:01.004 ****
2025-09-13 00:58:40.102314 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:58:40.102325 | orchestrator |
2025-09-13 00:58:40.102336 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-09-13 00:58:40.102347 | orchestrator | Saturday 13 September 2025 00:55:46 +0000 (0:00:00.520) 0:00:01.525 ****
2025-09-13 00:58:40.102358 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-13 00:58:40.102369 | orchestrator | changed:
[testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-13 00:58:40.102380 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-13 00:58:40.102390 | orchestrator | 2025-09-13 00:58:40.102401 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-13 00:58:40.102412 | orchestrator | Saturday 13 September 2025 00:55:47 +0000 (0:00:00.673) 0:00:02.198 **** 2025-09-13 00:58:40.102428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-13 00:58:40.102471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-13 00:58:40.102499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-13 00:58:40.102515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-13 00:58:40.102529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-13 00:58:40.102597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-13 00:58:40.102612 | orchestrator | 2025-09-13 00:58:40.102623 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-13 00:58:40.102639 | orchestrator | Saturday 13 September 2025 00:55:49 +0000 (0:00:01.805) 0:00:04.004 **** 2025-09-13 00:58:40.102651 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:58:40.102663 | orchestrator | 2025-09-13 00:58:40.102674 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-13 00:58:40.102685 | orchestrator | Saturday 13 September 2025 00:55:49 +0000 (0:00:00.519) 0:00:04.524 **** 2025-09-13 00:58:40.102705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-13 00:58:40.102718 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-13 00:58:40.102730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-13 00:58:40.102750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-13 00:58:40.102774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-13 00:58:40.102815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-13 00:58:40.102830 | orchestrator | 2025-09-13 00:58:40.102849 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-13 00:58:40.102862 | orchestrator | Saturday 13 September 2025 00:55:52 +0000 (0:00:02.878) 0:00:07.402 **** 2025-09-13 00:58:40.102875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-13 00:58:40.102890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-13 00:58:40.102903 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:58:40.102921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 
'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-13 00:58:40.102943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-13 00:58:40.102970 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:58:40.102984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-13 00:58:40.102998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-13 00:58:40.103012 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:58:40.103024 | orchestrator | 2025-09-13 00:58:40.103037 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-13 00:58:40.103049 | orchestrator | Saturday 13 September 2025 00:55:54 +0000 (0:00:01.389) 0:00:08.792 **** 2025-09-13 00:58:40.103067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-13 00:58:40.103089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-13 00:58:40.103110 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:58:40.103124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-13 00:58:40.103136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-13 00:58:40.103148 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:58:40.103164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-13 00:58:40.103184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-13 00:58:40.103202 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:58:40.103213 | orchestrator | 2025-09-13 00:58:40.103224 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-13 00:58:40.103235 | orchestrator | Saturday 13 
September 2025 00:55:55 +0000 (0:00:01.402) 0:00:10.195 **** 2025-09-13 00:58:40.103246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-13 00:58:40.103258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-13 00:58:40.103275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-13 00:58:40.103295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-13 00:58:40.103314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-13 00:58:40.103327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-13 00:58:40.103339 | orchestrator | 
2025-09-13 00:58:40.103350 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-13 00:58:40.103361 | orchestrator | Saturday 13 September 2025 00:55:57 +0000 (0:00:02.297) 0:00:12.492 **** 2025-09-13 00:58:40.103372 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:58:40.103383 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:58:40.103394 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:58:40.103405 | orchestrator | 2025-09-13 00:58:40.103416 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-13 00:58:40.103426 | orchestrator | Saturday 13 September 2025 00:56:00 +0000 (0:00:02.835) 0:00:15.328 **** 2025-09-13 00:58:40.103437 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:58:40.103448 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:58:40.103459 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:58:40.103470 | orchestrator | 2025-09-13 00:58:40.103481 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-13 00:58:40.103491 | orchestrator | Saturday 13 September 2025 00:56:02 +0000 (0:00:01.781) 0:00:17.109 **** 2025-09-13 00:58:40.103507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-13 00:58:40.103532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-13 00:58:40.103545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-13 00:58:40.103557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-13 00:58:40.103574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-09-13 00:58:40.103594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-13 00:58:40.103612 | orchestrator | 2025-09-13 00:58:40.103622 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-13 00:58:40.103633 | orchestrator | Saturday 13 September 2025 00:56:04 +0000 (0:00:02.110) 0:00:19.220 **** 2025-09-13 00:58:40.103644 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:58:40.103655 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:58:40.103666 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:58:40.103677 | orchestrator | 2025-09-13 00:58:40.103688 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-13 00:58:40.103699 | orchestrator | Saturday 13 September 2025 00:56:04 +0000 (0:00:00.305) 0:00:19.526 **** 2025-09-13 00:58:40.103710 | orchestrator | 2025-09-13 00:58:40.103721 | orchestrator | TASK [opensearch : 
Flush handlers] ********************************************* 2025-09-13 00:58:40.103732 | orchestrator | Saturday 13 September 2025 00:56:04 +0000 (0:00:00.060) 0:00:19.587 **** 2025-09-13 00:58:40.103742 | orchestrator | 2025-09-13 00:58:40.103753 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-13 00:58:40.103764 | orchestrator | Saturday 13 September 2025 00:56:05 +0000 (0:00:00.066) 0:00:19.654 **** 2025-09-13 00:58:40.103775 | orchestrator | 2025-09-13 00:58:40.103786 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-13 00:58:40.103811 | orchestrator | Saturday 13 September 2025 00:56:05 +0000 (0:00:00.066) 0:00:19.721 **** 2025-09-13 00:58:40.103822 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:58:40.103833 | orchestrator | 2025-09-13 00:58:40.103844 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-13 00:58:40.103855 | orchestrator | Saturday 13 September 2025 00:56:05 +0000 (0:00:00.204) 0:00:19.925 **** 2025-09-13 00:58:40.103866 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:58:40.103877 | orchestrator | 2025-09-13 00:58:40.103888 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-13 00:58:40.103899 | orchestrator | Saturday 13 September 2025 00:56:05 +0000 (0:00:00.602) 0:00:20.528 **** 2025-09-13 00:58:40.103910 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:58:40.103921 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:58:40.103932 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:58:40.103942 | orchestrator | 2025-09-13 00:58:40.103953 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-13 00:58:40.103964 | orchestrator | Saturday 13 September 2025 00:57:07 +0000 (0:01:01.142) 0:01:21.670 **** 2025-09-13 00:58:40.103975 | 
orchestrator | changed: [testbed-node-0] 2025-09-13 00:58:40.103986 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:58:40.103997 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:58:40.104008 | orchestrator | 2025-09-13 00:58:40.104018 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-13 00:58:40.104029 | orchestrator | Saturday 13 September 2025 00:58:28 +0000 (0:01:21.722) 0:02:43.392 **** 2025-09-13 00:58:40.104040 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:58:40.104057 | orchestrator | 2025-09-13 00:58:40.104068 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-13 00:58:40.104079 | orchestrator | Saturday 13 September 2025 00:58:29 +0000 (0:00:00.471) 0:02:43.864 **** 2025-09-13 00:58:40.104091 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:58:40.104102 | orchestrator | 2025-09-13 00:58:40.104112 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-09-13 00:58:40.104123 | orchestrator | Saturday 13 September 2025 00:58:31 +0000 (0:00:02.553) 0:02:46.417 **** 2025-09-13 00:58:40.104134 | orchestrator | ok: [testbed-node-0] 2025-09-13 00:58:40.104145 | orchestrator | 2025-09-13 00:58:40.104156 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-09-13 00:58:40.104167 | orchestrator | Saturday 13 September 2025 00:58:34 +0000 (0:00:02.331) 0:02:48.748 **** 2025-09-13 00:58:40.104178 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:58:40.104189 | orchestrator | 2025-09-13 00:58:40.104204 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-09-13 00:58:40.104216 | orchestrator | Saturday 13 September 2025 00:58:36 +0000 (0:00:02.697) 0:02:51.445 **** 2025-09-13 00:58:40.104227 | 
orchestrator | changed: [testbed-node-0]
2025-09-13 00:58:40.104237 | orchestrator |
2025-09-13 00:58:40.104248 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:58:40.104260 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-13 00:58:40.104272 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-13 00:58:40.104283 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-13 00:58:40.104294 | orchestrator |
2025-09-13 00:58:40.104305 | orchestrator |
2025-09-13 00:58:40.104315 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:58:40.104332 | orchestrator | Saturday 13 September 2025 00:58:39 +0000 (0:00:02.433) 0:02:53.879 ****
2025-09-13 00:58:40.104343 | orchestrator | ===============================================================================
2025-09-13 00:58:40.104354 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 81.72s
2025-09-13 00:58:40.104365 | orchestrator | opensearch : Restart opensearch container ------------------------------ 61.14s
2025-09-13 00:58:40.104376 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.88s
2025-09-13 00:58:40.104387 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.84s
2025-09-13 00:58:40.104398 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.70s
2025-09-13 00:58:40.104409 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.55s
2025-09-13 00:58:40.104420 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.43s
2025-09-13 00:58:40.104430 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.33s
2025-09-13 00:58:40.104441 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.30s
2025-09-13 00:58:40.104452 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.11s
2025-09-13 00:58:40.104463 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.81s
2025-09-13 00:58:40.104474 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.78s
2025-09-13 00:58:40.104485 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.40s
2025-09-13 00:58:40.104496 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.39s
2025-09-13 00:58:40.104506 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.67s
2025-09-13 00:58:40.104517 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.60s
2025-09-13 00:58:40.104541 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s
2025-09-13 00:58:40.104551 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s
2025-09-13 00:58:40.104562 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s
2025-09-13 00:58:40.104573 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s
2025-09-13 00:58:40.104584 | orchestrator | 2025-09-13 00:58:40 | INFO  | Task 7d6ab84f-924f-4881-acb1-b90789aa2e9e is in state STARTED
2025-09-13 00:58:40.104595 | orchestrator | 2025-09-13 00:58:40 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:58:43.155703 | orchestrator | 2025-09-13 00:58:43 | INFO  | Task e5b65605-2759-4d22-b24c-1a6e872a3456 is in state STARTED
2025-09-13 00:58:43.157542 | orchestrator | 2025-09-13 00:58:43 | INFO  | Task 7d6ab84f-924f-4881-acb1-b90789aa2e9e is in state STARTED
2025-09-13 00:58:43.157953 | orchestrator | 2025-09-13 00:58:43 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:58:46.211100 | orchestrator | 2025-09-13 00:58:46 | INFO  | Task e5b65605-2759-4d22-b24c-1a6e872a3456 is in state STARTED
2025-09-13 00:58:46.212881 | orchestrator | 2025-09-13 00:58:46 | INFO  | Task 7d6ab84f-924f-4881-acb1-b90789aa2e9e is in state STARTED
2025-09-13 00:58:46.212912 | orchestrator | 2025-09-13 00:58:46 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:58:49.256646 | orchestrator | 2025-09-13 00:58:49 | INFO  | Task e5b65605-2759-4d22-b24c-1a6e872a3456 is in state STARTED
2025-09-13 00:58:49.258680 | orchestrator | 2025-09-13 00:58:49 | INFO  | Task 7d6ab84f-924f-4881-acb1-b90789aa2e9e is in state STARTED
2025-09-13 00:58:49.258718 | orchestrator | 2025-09-13 00:58:49 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:58:52.291721 | orchestrator | 2025-09-13 00:58:52 | INFO  | Task e5b65605-2759-4d22-b24c-1a6e872a3456 is in state STARTED
2025-09-13 00:58:52.293018 | orchestrator | 2025-09-13 00:58:52 | INFO  | Task 7d6ab84f-924f-4881-acb1-b90789aa2e9e is in state STARTED
2025-09-13 00:58:52.293417 | orchestrator | 2025-09-13 00:58:52 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:58:55.342821 | orchestrator |
2025-09-13 00:58:55.343005 | orchestrator |
2025-09-13 00:58:55.343028 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-09-13 00:58:55.343041 | orchestrator |
2025-09-13 00:58:55.343053 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-09-13 00:58:55.343064 | orchestrator | Saturday 13 September 2025 00:55:45 +0000 (0:00:00.106) 0:00:00.106 ****
2025-09-13 00:58:55.343076 | orchestrator | ok: [localhost] => {
2025-09-13 00:58:55.343089 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-09-13 00:58:55.343100 | orchestrator | }
2025-09-13 00:58:55.343111 | orchestrator |
2025-09-13 00:58:55.343122 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-09-13 00:58:55.343133 | orchestrator | Saturday 13 September 2025 00:55:45 +0000 (0:00:00.050) 0:00:00.157 ****
2025-09-13 00:58:55.343144 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-09-13 00:58:55.343157 | orchestrator | ...ignoring
2025-09-13 00:58:55.343168 | orchestrator |
2025-09-13 00:58:55.343178 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-09-13 00:58:55.343189 | orchestrator | Saturday 13 September 2025 00:55:48 +0000 (0:00:02.866) 0:00:03.023 ****
2025-09-13 00:58:55.343200 | orchestrator | skipping: [localhost]
2025-09-13 00:58:55.343211 | orchestrator |
2025-09-13 00:58:55.343222 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-09-13 00:58:55.343255 | orchestrator | Saturday 13 September 2025 00:55:48 +0000 (0:00:00.047) 0:00:03.071 ****
2025-09-13 00:58:55.343267 | orchestrator | ok: [localhost]
2025-09-13 00:58:55.343277 | orchestrator |
2025-09-13 00:58:55.343288 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 00:58:55.343377 | orchestrator |
2025-09-13 00:58:55.343390 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-13 00:58:55.343401 | orchestrator | Saturday 13 September 2025 00:55:48 +0000 (0:00:00.336) 0:00:03.245 ****
2025-09-13 00:58:55.343412 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:58:55.343423 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:58:55.343434 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:58:55.343445 | orchestrator |
2025-09-13 00:58:55.343455 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 00:58:55.343466 | orchestrator | Saturday 13 September 2025 00:55:49 +0000 (0:00:00.336) 0:00:03.582 ****
2025-09-13 00:58:55.343477 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-09-13 00:58:55.343489 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-09-13 00:58:55.343499 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-09-13 00:58:55.343510 | orchestrator |
2025-09-13 00:58:55.343521 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-09-13 00:58:55.343531 | orchestrator |
2025-09-13 00:58:55.343543 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-09-13 00:58:55.343553 | orchestrator | Saturday 13 September 2025 00:55:49 +0000 (0:00:00.525) 0:00:04.108 ****
2025-09-13 00:58:55.343564 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-13 00:58:55.343575 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-09-13 00:58:55.343586 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-09-13 00:58:55.343596 | orchestrator |
2025-09-13 00:58:55.343607 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-13 00:58:55.343618 | orchestrator | Saturday 13 September 2025 00:55:49 +0000 (0:00:00.379) 0:00:04.487 ****
2025-09-13 00:58:55.343629 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:58:55.343641 | orchestrator |
2025-09-13 00:58:55.343652 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2025-09-13 00:58:55.343662 | orchestrator | Saturday 13 September 2025 00:55:50 +0000 (0:00:00.617) 0:00:05.105 ****
2025-09-13 00:58:55.343712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-13 00:58:55.343740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-13 00:58:55.343754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-13 00:58:55.343766 | orchestrator |
2025-09-13 00:58:55.343810 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2025-09-13 00:58:55.343829 | orchestrator | Saturday 13 September 2025 00:55:54 +0000 (0:00:03.840) 0:00:08.946 ****
2025-09-13 00:58:55.343841 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:58:55.343853 | orchestrator |
changed: [testbed-node-0] 2025-09-13 00:58:55.343864 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:58:55.343874 | orchestrator | 2025-09-13 00:58:55.343885 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-13 00:58:55.343896 | orchestrator | Saturday 13 September 2025 00:55:55 +0000 (0:00:00.842) 0:00:09.789 **** 2025-09-13 00:58:55.343907 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:58:55.343917 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:58:55.343928 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:58:55.343939 | orchestrator | 2025-09-13 00:58:55.343949 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-13 00:58:55.343960 | orchestrator | Saturday 13 September 2025 00:55:56 +0000 (0:00:01.440) 0:00:11.229 **** 2025-09-13 00:58:55.343972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-13 00:58:55.343997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-13 00:58:55.344017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-13 00:58:55.344029 | orchestrator | 2025-09-13 00:58:55.344040 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-13 00:58:55.344051 | orchestrator | Saturday 13 September 2025 00:56:00 +0000 (0:00:03.607) 0:00:14.837 **** 2025-09-13 00:58:55.344062 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:58:55.344073 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:58:55.344084 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:58:55.344094 | orchestrator | 2025-09-13 00:58:55.344105 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-13 00:58:55.344116 | orchestrator | Saturday 13 September 2025 00:56:01 +0000 (0:00:01.106) 0:00:15.943 **** 2025-09-13 00:58:55.344127 | orchestrator | changed: [testbed-node-0] 2025-09-13 00:58:55.344137 | orchestrator | changed: [testbed-node-1] 2025-09-13 00:58:55.344148 | orchestrator | changed: [testbed-node-2] 2025-09-13 00:58:55.344159 | orchestrator | 2025-09-13 00:58:55.344170 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-13 00:58:55.344181 | orchestrator | Saturday 13 September 2025 00:56:05 +0000 (0:00:04.073) 0:00:20.016 **** 2025-09-13 00:58:55.344192 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 00:58:55.344203 | orchestrator | 2025-09-13 00:58:55.344213 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over extra CA certificates] ******** 2025-09-13 00:58:55.344224 | orchestrator | Saturday 13 September 2025 00:56:05 +0000 (0:00:00.508) 0:00:20.525 **** 2025-09-13 00:58:55.344249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2025-09-13 00:58:55.344270 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:58:55.344282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-13 00:58:55.344294 | orchestrator | skipping: 
[testbed-node-0] 2025-09-13 00:58:55.344317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-13 00:58:55.344336 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:58:55.344347 | orchestrator | 2025-09-13 
00:58:55.344358 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-13 00:58:55.344369 | orchestrator | Saturday 13 September 2025 00:56:09 +0000 (0:00:03.143) 0:00:23.668 **** 2025-09-13 00:58:55.344381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-13 00:58:55.344393 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:58:55.344415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2025-09-13 00:58:55.344433 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:58:55.344445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-13 00:58:55.344457 | orchestrator | skipping: 
[testbed-node-0] 2025-09-13 00:58:55.344468 | orchestrator | 2025-09-13 00:58:55.344479 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-13 00:58:55.344490 | orchestrator | Saturday 13 September 2025 00:56:11 +0000 (0:00:02.841) 0:00:26.510 **** 2025-09-13 00:58:55.344501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-13 00:58:55.344520 | orchestrator | skipping: [testbed-node-2] 2025-09-13 00:58:55.344551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-13 00:58:55.344563 | orchestrator | skipping: [testbed-node-1] 2025-09-13 00:58:55.344575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 
 2025-09-13 00:58:55.344594 | orchestrator | skipping: [testbed-node-0] 2025-09-13 00:58:55.344605 | orchestrator | 2025-09-13 00:58:55.344616 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-13 00:58:55.344627 | orchestrator | Saturday 13 September 2025 00:56:15 +0000 (0:00:03.604) 0:00:30.114 **** 2025-09-13 00:58:55.344652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 
fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-13 00:58:55.344666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-13 00:58:55.344698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-13 00:58:55.344711 | orchestrator | 2025-09-13 00:58:55.344723 | orchestrator | TASK 
[mariadb : Create MariaDB volume] *****************************************
2025-09-13 00:58:55.344734 | orchestrator | Saturday 13 September 2025 00:56:19 +0000 (0:00:03.520) 0:00:33.635 ****
2025-09-13 00:58:55.344745 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:58:55.344755 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:58:55.344766 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:58:55.344777 | orchestrator |
2025-09-13 00:58:55.344803 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-09-13 00:58:55.344814 | orchestrator | Saturday 13 September 2025 00:56:20 +0000 (0:00:00.956) 0:00:34.592 ****
2025-09-13 00:58:55.344825 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:58:55.344835 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:58:55.344846 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:58:55.344856 | orchestrator |
2025-09-13 00:58:55.344867 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-09-13 00:58:55.344878 | orchestrator | Saturday 13 September 2025 00:56:20 +0000 (0:00:00.811) 0:00:35.403 ****
2025-09-13 00:58:55.344895 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:58:55.344906 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:58:55.344917 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:58:55.344927 | orchestrator |
2025-09-13 00:58:55.344938 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-09-13 00:58:55.344949 | orchestrator | Saturday 13 September 2025 00:56:21 +0000 (0:00:00.513) 0:00:35.917 ****
2025-09-13 00:58:55.344961 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-09-13 00:58:55.344971 | orchestrator | ...ignoring
2025-09-13 00:58:55.344982 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-09-13 00:58:55.344993 | orchestrator | ...ignoring
2025-09-13 00:58:55.345004 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-09-13 00:58:55.345015 | orchestrator | ...ignoring
2025-09-13 00:58:55.345026 | orchestrator |
2025-09-13 00:58:55.345037 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-09-13 00:58:55.345047 | orchestrator | Saturday 13 September 2025 00:56:32 +0000 (0:00:10.994) 0:00:46.911 ****
2025-09-13 00:58:55.345058 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:58:55.345068 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:58:55.345079 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:58:55.345089 | orchestrator |
2025-09-13 00:58:55.345100 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-09-13 00:58:55.345111 | orchestrator | Saturday 13 September 2025 00:56:32 +0000 (0:00:00.371) 0:00:47.283 ****
2025-09-13 00:58:55.345121 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:58:55.345132 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:58:55.345143 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:58:55.345153 | orchestrator |
2025-09-13 00:58:55.345164 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-09-13 00:58:55.345175 | orchestrator | Saturday 13 September 2025 00:56:33 +0000 (0:00:00.518) 0:00:47.801 ****
2025-09-13 00:58:55.345186 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:58:55.345196 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:58:55.345207 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:58:55.345218 | orchestrator |
2025-09-13 00:58:55.345228 | orchestrator | TASK
[mariadb : Extract MariaDB service WSREP sync status] *********************
2025-09-13 00:58:55.345239 | orchestrator | Saturday 13 September 2025 00:56:33 +0000 (0:00:00.353) 0:00:48.155 ****
2025-09-13 00:58:55.345250 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:58:55.345261 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:58:55.345271 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:58:55.345282 | orchestrator |
2025-09-13 00:58:55.345292 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-09-13 00:58:55.345303 | orchestrator | Saturday 13 September 2025 00:56:34 +0000 (0:00:00.390) 0:00:48.545 ****
2025-09-13 00:58:55.345314 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:58:55.345325 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:58:55.345336 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:58:55.345346 | orchestrator |
2025-09-13 00:58:55.345357 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-09-13 00:58:55.345368 | orchestrator | Saturday 13 September 2025 00:56:34 +0000 (0:00:00.376) 0:00:48.922 ****
2025-09-13 00:58:55.345389 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:58:55.345401 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:58:55.345412 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:58:55.345422 | orchestrator |
2025-09-13 00:58:55.345433 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-13 00:58:55.345443 | orchestrator | Saturday 13 September 2025 00:56:34 +0000 (0:00:00.521) 0:00:49.443 ****
2025-09-13 00:58:55.345454 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:58:55.345470 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:58:55.345480 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-09-13 00:58:55.345491 | orchestrator |
2025-09-13 00:58:55.345502 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-09-13 00:58:55.345512 | orchestrator | Saturday 13 September 2025 00:56:35 +0000 (0:00:00.337) 0:00:49.781 ****
2025-09-13 00:58:55.345523 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:58:55.345533 | orchestrator |
2025-09-13 00:58:55.345544 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2025-09-13 00:58:55.345555 | orchestrator | Saturday 13 September 2025 00:56:45 +0000 (0:00:10.113) 0:00:59.894 ****
2025-09-13 00:58:55.345565 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:58:55.345576 | orchestrator |
2025-09-13 00:58:55.345586 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-13 00:58:55.345597 | orchestrator | Saturday 13 September 2025 00:56:45 +0000 (0:00:00.110) 0:01:00.005 ****
2025-09-13 00:58:55.345608 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:58:55.345618 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:58:55.345629 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:58:55.345639 | orchestrator |
2025-09-13 00:58:55.345650 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2025-09-13 00:58:55.345661 | orchestrator | Saturday 13 September 2025 00:56:46 +0000 (0:00:00.992) 0:01:00.997 ****
2025-09-13 00:58:55.345672 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:58:55.345682 | orchestrator |
2025-09-13 00:58:55.345693 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2025-09-13 00:58:55.345703 | orchestrator | Saturday 13 September 2025 00:56:54 +0000 (0:00:07.731) 0:01:08.729 ****
2025-09-13 00:58:55.345714 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:58:55.345724 | orchestrator |
2025-09-13 00:58:55.345735 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2025-09-13 00:58:55.345746 | orchestrator | Saturday 13 September 2025 00:56:55 +0000 (0:00:01.709) 0:01:10.438 ****
2025-09-13 00:58:55.345756 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:58:55.345767 | orchestrator |
2025-09-13 00:58:55.345795 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2025-09-13 00:58:55.345807 | orchestrator | Saturday 13 September 2025 00:56:58 +0000 (0:00:02.528) 0:01:12.966 ****
2025-09-13 00:58:55.345817 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:58:55.345828 | orchestrator |
2025-09-13 00:58:55.345839 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2025-09-13 00:58:55.345850 | orchestrator | Saturday 13 September 2025 00:56:58 +0000 (0:00:00.127) 0:01:13.093 ****
2025-09-13 00:58:55.345860 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:58:55.345871 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:58:55.345881 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:58:55.345892 | orchestrator |
2025-09-13 00:58:55.345903 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2025-09-13 00:58:55.345913 | orchestrator | Saturday 13 September 2025 00:56:58 +0000 (0:00:00.305) 0:01:13.399 ****
2025-09-13 00:58:55.345924 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:58:55.345934 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-09-13 00:58:55.345945 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:58:55.345956 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:58:55.345966 | orchestrator |
2025-09-13 00:58:55.345977 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-09-13 00:58:55.345988 | orchestrator | skipping: no hosts matched
2025-09-13 00:58:55.345998 | orchestrator |
2025-09-13 00:58:55.346009 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-09-13 00:58:55.346055 | orchestrator |
2025-09-13 00:58:55.346067 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-09-13 00:58:55.346078 | orchestrator | Saturday 13 September 2025 00:56:59 +0000 (0:00:00.535) 0:01:13.934 ****
2025-09-13 00:58:55.346095 | orchestrator | changed: [testbed-node-1]
2025-09-13 00:58:55.346106 | orchestrator |
2025-09-13 00:58:55.346117 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-09-13 00:58:55.346127 | orchestrator | Saturday 13 September 2025 00:57:17 +0000 (0:00:18.394) 0:01:32.329 ****
2025-09-13 00:58:55.346138 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:58:55.346148 | orchestrator |
2025-09-13 00:58:55.346159 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-09-13 00:58:55.346170 | orchestrator | Saturday 13 September 2025 00:57:38 +0000 (0:00:20.594) 0:01:52.923 ****
2025-09-13 00:58:55.346180 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:58:55.346191 | orchestrator |
2025-09-13 00:58:55.346202 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-09-13 00:58:55.346212 | orchestrator |
2025-09-13 00:58:55.346223 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-09-13 00:58:55.346233 | orchestrator | Saturday 13 September 2025 00:57:40 +0000 (0:00:02.420) 0:01:55.344 ****
2025-09-13 00:58:55.346244 | orchestrator | changed: [testbed-node-2]
2025-09-13 00:58:55.346255 | orchestrator |
2025-09-13 00:58:55.346265 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-09-13 00:58:55.346276 | orchestrator | Saturday 13 September 2025 00:58:05 +0000 (0:00:24.353) 0:02:19.697 ****
2025-09-13 00:58:55.346286 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:58:55.346297 | orchestrator |
2025-09-13 00:58:55.346308 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-09-13 00:58:55.346319 | orchestrator | Saturday 13 September 2025 00:58:20 +0000 (0:00:15.662) 0:02:35.360 ****
2025-09-13 00:58:55.346329 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:58:55.346340 | orchestrator |
2025-09-13 00:58:55.346351 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-09-13 00:58:55.346361 | orchestrator |
2025-09-13 00:58:55.346383 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-09-13 00:58:55.346395 | orchestrator | Saturday 13 September 2025 00:58:23 +0000 (0:00:02.456) 0:02:37.817 ****
2025-09-13 00:58:55.346406 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:58:55.346416 | orchestrator |
2025-09-13 00:58:55.346427 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-09-13 00:58:55.346438 | orchestrator | Saturday 13 September 2025 00:58:34 +0000 (0:00:11.168) 0:02:48.985 ****
2025-09-13 00:58:55.346449 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:58:55.346459 | orchestrator |
2025-09-13 00:58:55.346470 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-09-13 00:58:55.346480 | orchestrator | Saturday 13 September 2025 00:58:38 +0000 (0:00:04.547) 0:02:53.533 ****
2025-09-13 00:58:55.346491 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:58:55.346502 | orchestrator |
2025-09-13 00:58:55.346513 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-09-13 00:58:55.346523 | orchestrator |
2025-09-13 00:58:55.346534 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-09-13 00:58:55.346544 | orchestrator | Saturday 13 September 2025 00:58:41 +0000 (0:00:02.767) 0:02:56.300 ****
2025-09-13 00:58:55.346555 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 00:58:55.346566 | orchestrator |
2025-09-13 00:58:55.346577 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2025-09-13 00:58:55.346587 | orchestrator | Saturday 13 September 2025 00:58:42 +0000 (0:00:00.544) 0:02:56.845 ****
2025-09-13 00:58:55.346598 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:58:55.346609 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:58:55.346619 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:58:55.346630 | orchestrator |
2025-09-13 00:58:55.346640 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2025-09-13 00:58:55.346651 | orchestrator | Saturday 13 September 2025 00:58:44 +0000 (0:00:02.274) 0:02:59.119 ****
2025-09-13 00:58:55.346669 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:58:55.346679 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:58:55.346690 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:58:55.346701 | orchestrator |
2025-09-13 00:58:55.346711 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2025-09-13 00:58:55.346722 | orchestrator | Saturday 13 September 2025 00:58:46 +0000 (0:00:02.281) 0:03:01.401 ****
2025-09-13 00:58:55.346732 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:58:55.346743 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:58:55.346754 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:58:55.346764 | orchestrator |
2025-09-13 00:58:55.346775 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2025-09-13 00:58:55.346814 | orchestrator | Saturday 13 September 2025 00:58:48 +0000 (0:00:02.090) 0:03:03.491 ****
2025-09-13 00:58:55.346825 |
orchestrator | skipping: [testbed-node-1]
2025-09-13 00:58:55.346836 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:58:55.346847 | orchestrator | changed: [testbed-node-0]
2025-09-13 00:58:55.346857 | orchestrator |
2025-09-13 00:58:55.346868 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2025-09-13 00:58:55.346878 | orchestrator | Saturday 13 September 2025 00:58:51 +0000 (0:00:02.058) 0:03:05.549 ****
2025-09-13 00:58:55.346889 | orchestrator | ok: [testbed-node-0]
2025-09-13 00:58:55.346900 | orchestrator | ok: [testbed-node-1]
2025-09-13 00:58:55.346911 | orchestrator | ok: [testbed-node-2]
2025-09-13 00:58:55.346921 | orchestrator |
2025-09-13 00:58:55.346932 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-09-13 00:58:55.346942 | orchestrator | Saturday 13 September 2025 00:58:53 +0000 (0:00:02.704) 0:03:08.254 ****
2025-09-13 00:58:55.346953 | orchestrator | skipping: [testbed-node-0]
2025-09-13 00:58:55.346964 | orchestrator | skipping: [testbed-node-1]
2025-09-13 00:58:55.346974 | orchestrator | skipping: [testbed-node-2]
2025-09-13 00:58:55.346985 | orchestrator |
2025-09-13 00:58:55.346995 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:58:55.347006 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-09-13 00:58:55.347017 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2025-09-13 00:58:55.347030 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-09-13 00:58:55.347041 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-09-13 00:58:55.347051 | orchestrator |
2025-09-13 00:58:55.347062 | orchestrator |
2025-09-13 00:58:55.347072 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:58:55.347083 | orchestrator | Saturday 13 September 2025 00:58:54 +0000 (0:00:00.460) 0:03:08.715 ****
2025-09-13 00:58:55.347094 | orchestrator | ===============================================================================
2025-09-13 00:58:55.347104 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 42.75s
2025-09-13 00:58:55.347115 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.26s
2025-09-13 00:58:55.347126 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.17s
2025-09-13 00:58:55.347136 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.99s
2025-09-13 00:58:55.347147 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.11s
2025-09-13 00:58:55.347158 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.73s
2025-09-13 00:58:55.347179 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.88s
2025-09-13 00:58:55.347198 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.55s
2025-09-13 00:58:55.347209 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.07s
2025-09-13 00:58:55.347220 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.84s
2025-09-13 00:58:55.347230 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.61s
2025-09-13 00:58:55.347241 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.60s
2025-09-13 00:58:55.347252 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.52s
2025-09-13 00:58:55.347263 | orchestrator | service-cert-copy :
mariadb | Copying over extra CA certificates -------- 3.14s
2025-09-13 00:58:55.347273 | orchestrator | Check MariaDB service --------------------------------------------------- 2.87s
2025-09-13 00:58:55.347284 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.84s
2025-09-13 00:58:55.347295 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.77s
2025-09-13 00:58:55.347306 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.70s
2025-09-13 00:58:55.347316 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.53s
2025-09-13 00:58:55.347327 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.28s
2025-09-13 00:58:55.347337 | orchestrator | 2025-09-13 00:58:55 | INFO  | Task e5b65605-2759-4d22-b24c-1a6e872a3456 is in state SUCCESS
2025-09-13 00:58:55.347349 | orchestrator | 2025-09-13 00:58:55 | INFO  | Task 7d6ab84f-924f-4881-acb1-b90789aa2e9e is in state STARTED
2025-09-13 00:58:55.347359 | orchestrator | 2025-09-13 00:58:55 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:58:58.381917 | orchestrator | 2025-09-13 00:58:58 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 00:58:58.384147 | orchestrator | 2025-09-13 00:58:58 | INFO  | Task 7d6ab84f-924f-4881-acb1-b90789aa2e9e is in state STARTED
2025-09-13 00:58:58.385390 | orchestrator | 2025-09-13 00:58:58 | INFO  | Task 78c68b84-6553-4856-a653-cdb803c2f8bf is in state STARTED
2025-09-13 00:58:58.385593 | orchestrator | 2025-09-13 00:58:58 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:59:56.212233 | orchestrator | 2025-09-13 00:59:56 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 00:59:56.212732 | orchestrator | 2025-09-13 00:59:56 | INFO  | Task 83b70597-e4e6-441d-8346-75bc75e18ff7 is in state STARTED
2025-09-13 00:59:56.218740 | orchestrator |
2025-09-13 00:59:56.218850 | orchestrator |
2025-09-13 00:59:56.218864 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-09-13 00:59:56.218876 | orchestrator |
2025-09-13 00:59:56.218886 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-09-13 00:59:56.218896 | orchestrator | Saturday 13 September 2025 00:57:47 +0000 (0:00:00.617) 0:00:00.617 ****
2025-09-13 00:59:56.218907 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
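The status lines above come from the OSISM task watcher, which simply polls each Celery task's state at a fixed interval until it leaves STARTED. A minimal sketch of that loop, assuming a hypothetical `get_task_state` callable rather than the real osism client API:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, log=print):
    """Poll task states until none is still STARTED, mirroring the
    'Task ... is in state STARTED / Wait 1 second(s)' loop in the log.

    get_task_state is a hypothetical lookup (task id -> state string);
    the real client queries the OSISM API instead."""
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = get_task_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                still_running.append(task_id)
        if not still_running:
            return
        log(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
        pending = still_running
```

Tasks that finish (like e5b65605 reaching SUCCESS above) drop out of the pending set, while new task IDs appear in later iterations as the deployment pipeline schedules them.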
2025-09-13 00:59:56.218917 | orchestrator |
2025-09-13 00:59:56.218946 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-09-13 00:59:56.218957 | orchestrator | Saturday 13 September 2025 00:57:48 +0000 (0:00:00.600) 0:00:01.217 ****
2025-09-13 00:59:56.218967 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:59:56.218977 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:59:56.218987 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:59:56.218997 | orchestrator |
2025-09-13 00:59:56.219006 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-13 00:59:56.219016 | orchestrator | Saturday 13 September 2025 00:57:48 +0000 (0:00:00.614) 0:00:01.831 ****
2025-09-13 00:59:56.219026 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:59:56.219035 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:59:56.219045 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:59:56.219054 | orchestrator |
2025-09-13 00:59:56.219064 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-13 00:59:56.219074 | orchestrator | Saturday 13 September 2025 00:57:49 +0000 (0:00:00.242) 0:00:02.074 ****
2025-09-13 00:59:56.219083 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:59:56.219093 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:59:56.219102 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:59:56.219112 | orchestrator |
2025-09-13 00:59:56.219121 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-13 00:59:56.219131 | orchestrator | Saturday 13 September 2025 00:57:49 +0000 (0:00:00.669) 0:00:02.743 ****
2025-09-13 00:59:56.219141 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:59:56.219150 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:59:56.219160 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:59:56.219169 | orchestrator |
2025-09-13 00:59:56.219179 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-13 00:59:56.219189 | orchestrator | Saturday 13 September 2025 00:57:49 +0000 (0:00:00.243) 0:00:02.987 ****
2025-09-13 00:59:56.219211 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:59:56.219232 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:59:56.219242 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:59:56.219251 | orchestrator |
2025-09-13 00:59:56.219261 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-13 00:59:56.219271 | orchestrator | Saturday 13 September 2025 00:57:50 +0000 (0:00:00.282) 0:00:03.270 ****
2025-09-13 00:59:56.219282 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:59:56.219293 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:59:56.219305 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:59:56.219315 | orchestrator |
2025-09-13 00:59:56.219342 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-13 00:59:56.219353 | orchestrator | Saturday 13 September 2025 00:57:50 +0000 (0:00:00.253) 0:00:03.523 ****
2025-09-13 00:59:56.219386 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.219398 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:59:56.219409 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:59:56.219420 | orchestrator |
2025-09-13 00:59:56.219431 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-13 00:59:56.219441 | orchestrator | Saturday 13 September 2025 00:57:50 +0000 (0:00:00.355) 0:00:03.879 ****
2025-09-13 00:59:56.219452 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:59:56.219462 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:59:56.219473 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:59:56.219484 | orchestrator |
2025-09-13 00:59:56.219494 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-13 00:59:56.219505 | orchestrator | Saturday 13 September 2025 00:57:51 +0000 (0:00:00.225) 0:00:04.104 ****
2025-09-13 00:59:56.219516 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-13 00:59:56.219526 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-13 00:59:56.219537 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-13 00:59:56.219548 | orchestrator |
2025-09-13 00:59:56.219558 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-13 00:59:56.219569 | orchestrator | Saturday 13 September 2025 00:57:51 +0000 (0:00:00.598) 0:00:04.702 ****
2025-09-13 00:59:56.219580 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:59:56.219590 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:59:56.219601 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:59:56.219612 | orchestrator |
2025-09-13 00:59:56.219623 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-13 00:59:56.219634 | orchestrator | Saturday 13 September 2025 00:57:52 +0000 (0:00:00.355) 0:00:05.058 ****
2025-09-13 00:59:56.219644 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-13 00:59:56.219653 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-13 00:59:56.219663 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-13 00:59:56.219672 | orchestrator |
2025-09-13 00:59:56.219682 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-13 00:59:56.219691 | orchestrator | Saturday 13 September 2025 00:57:54 +0000 (0:00:02.008) 0:00:07.067 ****
2025-09-13 00:59:56.219701 |
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-13 00:59:56.219711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-13 00:59:56.219722 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-13 00:59:56.219731 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:59:56.219741 | orchestrator | 2025-09-13 00:59:56.219751 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-13 00:59:56.219823 | orchestrator | Saturday 13 September 2025 00:57:54 +0000 (0:00:00.427) 0:00:07.494 **** 2025-09-13 00:59:56.219838 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.219851 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.219861 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.219871 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:59:56.219881 | orchestrator | 2025-09-13 00:59:56.219898 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-13 00:59:56.219908 | orchestrator | Saturday 13 September 2025 00:57:55 +0000 (0:00:00.778) 0:00:08.273 **** 2025-09-13 00:59:56.219919 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.219931 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.219948 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.219958 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:59:56.219968 | orchestrator | 2025-09-13 00:59:56.219978 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-13 00:59:56.219988 | orchestrator | Saturday 13 September 2025 00:57:55 +0000 (0:00:00.153) 0:00:08.426 **** 2025-09-13 00:59:56.220000 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'bcef005a81eb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-13 00:57:52.631871', 'end': '2025-09-13 00:57:52.682581', 'delta': '0:00:00.050710', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bcef005a81eb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-13 00:59:56.220013 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2bb5b04eb9d2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-13 00:57:53.304506', 'end': '2025-09-13 00:57:53.361502', 'delta': '0:00:00.056996', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2bb5b04eb9d2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-13 00:59:56.220053 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fdda0415d4f4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-13 00:57:53.846638', 'end': '2025-09-13 00:57:53.889904', 'delta': '0:00:00.043266', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fdda0415d4f4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-13 00:59:56.220072 | orchestrator | 2025-09-13 00:59:56.220082 | 
orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-13 00:59:56.220092 | orchestrator | Saturday 13 September 2025 00:57:55 +0000 (0:00:00.375) 0:00:08.802 ****
2025-09-13 00:59:56.220102 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:59:56.220112 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:59:56.220121 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:59:56.220131 | orchestrator |
2025-09-13 00:59:56.220141 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-13 00:59:56.220151 | orchestrator | Saturday 13 September 2025 00:57:56 +0000 (0:00:00.416) 0:00:09.219 ****
2025-09-13 00:59:56.220160 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-09-13 00:59:56.220170 | orchestrator |
2025-09-13 00:59:56.220180 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-13 00:59:56.220189 | orchestrator | Saturday 13 September 2025 00:57:57 +0000 (0:00:01.704) 0:00:10.923 ****
2025-09-13 00:59:56.220199 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.220209 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:59:56.220218 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:59:56.220228 | orchestrator |
2025-09-13 00:59:56.220238 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-13 00:59:56.220247 | orchestrator | Saturday 13 September 2025 00:57:58 +0000 (0:00:00.284) 0:00:11.208 ****
2025-09-13 00:59:56.220257 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.220267 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:59:56.220276 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:59:56.220286 | orchestrator |
2025-09-13 00:59:56.220295 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-13 00:59:56.220305 | orchestrator | Saturday 13 September 2025 00:57:58 +0000 (0:00:00.405) 0:00:11.614 ****
2025-09-13 00:59:56.220315 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.220324 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:59:56.220334 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:59:56.220344 | orchestrator |
2025-09-13 00:59:56.220367 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-13 00:59:56.220377 | orchestrator | Saturday 13 September 2025 00:57:59 +0000 (0:00:00.468) 0:00:12.082 ****
2025-09-13 00:59:56.220387 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:59:56.220397 | orchestrator |
2025-09-13 00:59:56.220407 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-13 00:59:56.220416 | orchestrator | Saturday 13 September 2025 00:57:59 +0000 (0:00:00.121) 0:00:12.204 ****
2025-09-13 00:59:56.220426 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.220435 | orchestrator |
2025-09-13 00:59:56.220445 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-13 00:59:56.220455 | orchestrator | Saturday 13 September 2025 00:57:59 +0000 (0:00:00.218) 0:00:12.422 ****
2025-09-13 00:59:56.220465 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.220474 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:59:56.220484 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:59:56.220494 | orchestrator |
2025-09-13 00:59:56.220503 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-13 00:59:56.220513 | orchestrator | Saturday 13 September 2025 00:57:59 +0000 (0:00:00.292) 0:00:12.715 ****
2025-09-13 00:59:56.220522 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.220532 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:59:56.220542 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:59:56.220551 | orchestrator |
2025-09-13 00:59:56.220561 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-13 00:59:56.220571 | orchestrator | Saturday 13 September 2025 00:58:00 +0000 (0:00:00.343) 0:00:13.058 ****
2025-09-13 00:59:56.220580 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.220596 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:59:56.220605 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:59:56.220615 | orchestrator |
2025-09-13 00:59:56.220625 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-13 00:59:56.220634 | orchestrator | Saturday 13 September 2025 00:58:00 +0000 (0:00:00.552) 0:00:13.610 ****
2025-09-13 00:59:56.220644 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.220653 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:59:56.220663 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:59:56.220672 | orchestrator |
2025-09-13 00:59:56.220682 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-13 00:59:56.220692 | orchestrator | Saturday 13 September 2025 00:58:00 +0000 (0:00:00.367) 0:00:13.977 ****
2025-09-13 00:59:56.220702 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.220711 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:59:56.220721 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:59:56.220731 | orchestrator |
2025-09-13 00:59:56.220740 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-13 00:59:56.220750 | orchestrator | Saturday 13 September 2025 00:58:01 +0000 (0:00:00.308) 0:00:14.286 ****
2025-09-13 00:59:56.220777 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.220787 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:59:56.220797 | orchestrator | skipping:
[testbed-node-5] 2025-09-13 00:59:56.220806 | orchestrator | 2025-09-13 00:59:56.220816 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-13 00:59:56.220855 | orchestrator | Saturday 13 September 2025 00:58:01 +0000 (0:00:00.327) 0:00:14.614 **** 2025-09-13 00:59:56.220866 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:59:56.220876 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:59:56.220885 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:59:56.220895 | orchestrator | 2025-09-13 00:59:56.220905 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-13 00:59:56.220914 | orchestrator | Saturday 13 September 2025 00:58:02 +0000 (0:00:00.493) 0:00:15.107 **** 2025-09-13 00:59:56.220925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--741132e6--4e77--5ad5--aab1--a12c98657a1e-osd--block--741132e6--4e77--5ad5--aab1--a12c98657a1e', 'dm-uuid-LVM-jJI5DDIpu0EItbMCyD70C1YVS3RuFgkIDzpp3s6Tq8hGjqWaSBzuz7Maducd3XlY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.220936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c9c3f5f4--a401--5886--82fa--33c7ca08590f-osd--block--c9c3f5f4--a401--5886--82fa--33c7ca08590f', 'dm-uuid-LVM-VOGzGOt7N2MJGxjnyWXZl4x2rYodV1SMq74bPzX15UowmcKMO670XD4LKiQ0PgHi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.220952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.220963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.220980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.220990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221000 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221037 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221049 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d', 'scsi-SQEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:59:56.221100 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--741132e6--4e77--5ad5--aab1--a12c98657a1e-osd--block--741132e6--4e77--5ad5--aab1--a12c98657a1e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9R2x89-CdLU-slGE-dXAv-GI8t-5WOV-d3W3gk', 'scsi-0QEMU_QEMU_HARDDISK_6e724704-b413-40a8-af93-f723a1c0b62f', 'scsi-SQEMU_QEMU_HARDDISK_6e724704-b413-40a8-af93-f723a1c0b62f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:59:56.221139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c9c3f5f4--a401--5886--82fa--33c7ca08590f-osd--block--c9c3f5f4--a401--5886--82fa--33c7ca08590f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5RJTwx-hdJX-JGy3-lgIp-xI98-Pe6c-cJpC7g', 'scsi-0QEMU_QEMU_HARDDISK_e25c372e-2cb9-47f6-a0c5-1defd25ac71c', 'scsi-SQEMU_QEMU_HARDDISK_e25c372e-2cb9-47f6-a0c5-1defd25ac71c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:59:56.221151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b9d4bd55--4398--5073--b181--64dcd216e500-osd--block--b9d4bd55--4398--5073--b181--64dcd216e500', 'dm-uuid-LVM-1qtX0Jo6rJTSVRgewMZPqsBZ847hNk4286JrwLgbQ49IsWeBUP1OvwJlrIeXA7Ip'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c46d17e-adbc-49dd-8bd7-8befc745e964', 'scsi-SQEMU_QEMU_HARDDISK_0c46d17e-adbc-49dd-8bd7-8befc745e964'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:59:56.221177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b087737a--96b5--5170--ab1c--c312068a0bca-osd--block--b087737a--96b5--5170--ab1c--c312068a0bca', 'dm-uuid-LVM-4ziM4yN1AYFDpeajDs2cz5TdsO0MUbtaO80VVHJZIgmxFhRQYU2eEdcQX9mZa8AX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-13-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:59:56.221205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221215 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:59:56.221225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221293 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8', 'scsi-SQEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:59:56.221377 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4283f495--c022--53d0--a3fe--4c36d70cad8f-osd--block--4283f495--c022--53d0--a3fe--4c36d70cad8f', 'dm-uuid-LVM-kp4Dl7y4fqNhI87RlNBmLiywBxnqlzkWBTHJoQKzLiT8HpT3RobzW8ESzX9IOpdT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b9d4bd55--4398--5073--b181--64dcd216e500-osd--block--b9d4bd55--4398--5073--b181--64dcd216e500'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wgTo8x-jTlS-kgz9-0kDJ-DbDW-rvkH-juQByI', 'scsi-0QEMU_QEMU_HARDDISK_e924364d-2e91-46ce-bd4b-cca5d229d1e6', 'scsi-SQEMU_QEMU_HARDDISK_e924364d-2e91-46ce-bd4b-cca5d229d1e6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:59:56.221409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7339ba9f--b6a9--52d7--bde1--e21ae438ff7a-osd--block--7339ba9f--b6a9--52d7--bde1--e21ae438ff7a', 'dm-uuid-LVM-kO2GP93cMpGUHtYUCcev57k9Af2LcWSdZzKCYwYU57czijTAO3e8T0YNHLLhZcxe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221430 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b087737a--96b5--5170--ab1c--c312068a0bca-osd--block--b087737a--96b5--5170--ab1c--c312068a0bca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qZWD3J-GcDo-qoZV-e3Jd-h0uy-CDo5-lQ9w0F', 'scsi-0QEMU_QEMU_HARDDISK_f868cbab-65ba-4325-b003-03d97073cddb', 'scsi-SQEMU_QEMU_HARDDISK_f868cbab-65ba-4325-b003-03d97073cddb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:59:56.221440 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a3f219a-02e3-456c-9d7f-0c5a8049cd2b', 'scsi-SQEMU_QEMU_HARDDISK_5a3f219a-02e3-456c-9d7f-0c5a8049cd2b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:59:56.221477 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-13-00-02-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:59:56.221504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
2025-09-13 00:59:56 | INFO  | Task 7d6ab84f-924f-4881-acb1-b90789aa2e9e is in state SUCCESS
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221514 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:59:56.221528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2025-09-13 00:59:56.221579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96', 'scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part1', 'scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part14', 'scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part15', 'scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part16', 'scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:59:56.221603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4283f495--c022--53d0--a3fe--4c36d70cad8f-osd--block--4283f495--c022--53d0--a3fe--4c36d70cad8f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BTPet6-ys3j-9eIw-fFqX-JJfw-Xj04-Wo18Tl', 'scsi-0QEMU_QEMU_HARDDISK_1763dbba-d504-4b6d-865a-93cad2d65fc8', 'scsi-SQEMU_QEMU_HARDDISK_1763dbba-d504-4b6d-865a-93cad2d65fc8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:59:56.221614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7339ba9f--b6a9--52d7--bde1--e21ae438ff7a-osd--block--7339ba9f--b6a9--52d7--bde1--e21ae438ff7a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Vbpmmt-FNR2-sy7Q-zbki-Miq7-xgRc-ePYo6O', 'scsi-0QEMU_QEMU_HARDDISK_c5da3e8c-99b7-4761-a17c-7637f0eb6556', 'scsi-SQEMU_QEMU_HARDDISK_c5da3e8c-99b7-4761-a17c-7637f0eb6556'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:59:56.221624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9346358d-8291-41dd-be96-0d8c84c54113', 'scsi-SQEMU_QEMU_HARDDISK_9346358d-8291-41dd-be96-0d8c84c54113'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:59:56.221641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-13-00-01-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-13 00:59:56.221652 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:59:56.221661 | orchestrator | 2025-09-13 00:59:56.221671 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-13 00:59:56.221681 | orchestrator | Saturday 13 September 2025 00:58:02 +0000 (0:00:00.533) 0:00:15.641 **** 2025-09-13 00:59:56.221691 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--741132e6--4e77--5ad5--aab1--a12c98657a1e-osd--block--741132e6--4e77--5ad5--aab1--a12c98657a1e', 'dm-uuid-LVM-jJI5DDIpu0EItbMCyD70C1YVS3RuFgkIDzpp3s6Tq8hGjqWaSBzuz7Maducd3XlY'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221713 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c9c3f5f4--a401--5886--82fa--33c7ca08590f-osd--block--c9c3f5f4--a401--5886--82fa--33c7ca08590f', 'dm-uuid-LVM-VOGzGOt7N2MJGxjnyWXZl4x2rYodV1SMq74bPzX15UowmcKMO670XD4LKiQ0PgHi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221723 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221733 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221744 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221789 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221800 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221817 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221832 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b9d4bd55--4398--5073--b181--64dcd216e500-osd--block--b9d4bd55--4398--5073--b181--64dcd216e500', 'dm-uuid-LVM-1qtX0Jo6rJTSVRgewMZPqsBZ847hNk4286JrwLgbQ49IsWeBUP1OvwJlrIeXA7Ip'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221842 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221853 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221868 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b087737a--96b5--5170--ab1c--c312068a0bca-osd--block--b087737a--96b5--5170--ab1c--c312068a0bca', 'dm-uuid-LVM-4ziM4yN1AYFDpeajDs2cz5TdsO0MUbtaO80VVHJZIgmxFhRQYU2eEdcQX9mZa8AX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221892 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d', 'scsi-SQEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e080a4d-0412-4b1d-8194-d3437a56371d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221904 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221921 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--741132e6--4e77--5ad5--aab1--a12c98657a1e-osd--block--741132e6--4e77--5ad5--aab1--a12c98657a1e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9R2x89-CdLU-slGE-dXAv-GI8t-5WOV-d3W3gk', 'scsi-0QEMU_QEMU_HARDDISK_6e724704-b413-40a8-af93-f723a1c0b62f', 'scsi-SQEMU_QEMU_HARDDISK_6e724704-b413-40a8-af93-f723a1c0b62f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221932 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c9c3f5f4--a401--5886--82fa--33c7ca08590f-osd--block--c9c3f5f4--a401--5886--82fa--33c7ca08590f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5RJTwx-hdJX-JGy3-lgIp-xI98-Pe6c-cJpC7g', 'scsi-0QEMU_QEMU_HARDDISK_e25c372e-2cb9-47f6-a0c5-1defd25ac71c', 'scsi-SQEMU_QEMU_HARDDISK_e25c372e-2cb9-47f6-a0c5-1defd25ac71c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221951 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221966 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c46d17e-adbc-49dd-8bd7-8befc745e964', 'scsi-SQEMU_QEMU_HARDDISK_0c46d17e-adbc-49dd-8bd7-8befc745e964'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221976 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221986 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-13-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.221996 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:59:56.222013 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222063 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222073 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222088 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222098 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4283f495--c022--53d0--a3fe--4c36d70cad8f-osd--block--4283f495--c022--53d0--a3fe--4c36d70cad8f', 'dm-uuid-LVM-kp4Dl7y4fqNhI87RlNBmLiywBxnqlzkWBTHJoQKzLiT8HpT3RobzW8ESzX9IOpdT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222108 | 
orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222126 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7339ba9f--b6a9--52d7--bde1--e21ae438ff7a-osd--block--7339ba9f--b6a9--52d7--bde1--e21ae438ff7a', 'dm-uuid-LVM-kO2GP93cMpGUHtYUCcev57k9Af2LcWSdZzKCYwYU57czijTAO3e8T0YNHLLhZcxe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222148 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8', 'scsi-SQEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_8a9a8ade-9c9c-4646-9730-cf3d93ecd9e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-13 00:59:56.222160 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222170 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222186 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b9d4bd55--4398--5073--b181--64dcd216e500-osd--block--b9d4bd55--4398--5073--b181--64dcd216e500'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wgTo8x-jTlS-kgz9-0kDJ-DbDW-rvkH-juQByI', 'scsi-0QEMU_QEMU_HARDDISK_e924364d-2e91-46ce-bd4b-cca5d229d1e6', 'scsi-SQEMU_QEMU_HARDDISK_e924364d-2e91-46ce-bd4b-cca5d229d1e6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222203 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b087737a--96b5--5170--ab1c--c312068a0bca-osd--block--b087737a--96b5--5170--ab1c--c312068a0bca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qZWD3J-GcDo-qoZV-e3Jd-h0uy-CDo5-lQ9w0F', 'scsi-0QEMU_QEMU_HARDDISK_f868cbab-65ba-4325-b003-03d97073cddb', 'scsi-SQEMU_QEMU_HARDDISK_f868cbab-65ba-4325-b003-03d97073cddb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222217 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222228 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a3f219a-02e3-456c-9d7f-0c5a8049cd2b', 'scsi-SQEMU_QEMU_HARDDISK_5a3f219a-02e3-456c-9d7f-0c5a8049cd2b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222238 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222253 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-13-00-02-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222272 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:59:56.222282 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222293 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222307 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222318 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222336 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96', 'scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part1', 'scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part14', 'scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part15', 'scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part16', 'scsi-SQEMU_QEMU_HARDDISK_42f67d50-f547-4949-ad70-272b6f024e96-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-13 00:59:56.222354 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4283f495--c022--53d0--a3fe--4c36d70cad8f-osd--block--4283f495--c022--53d0--a3fe--4c36d70cad8f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BTPet6-ys3j-9eIw-fFqX-JJfw-Xj04-Wo18Tl', 'scsi-0QEMU_QEMU_HARDDISK_1763dbba-d504-4b6d-865a-93cad2d65fc8', 'scsi-SQEMU_QEMU_HARDDISK_1763dbba-d504-4b6d-865a-93cad2d65fc8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222369 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7339ba9f--b6a9--52d7--bde1--e21ae438ff7a-osd--block--7339ba9f--b6a9--52d7--bde1--e21ae438ff7a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Vbpmmt-FNR2-sy7Q-zbki-Miq7-xgRc-ePYo6O', 'scsi-0QEMU_QEMU_HARDDISK_c5da3e8c-99b7-4761-a17c-7637f0eb6556', 'scsi-SQEMU_QEMU_HARDDISK_c5da3e8c-99b7-4761-a17c-7637f0eb6556'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222380 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9346358d-8291-41dd-be96-0d8c84c54113', 'scsi-SQEMU_QEMU_HARDDISK_9346358d-8291-41dd-be96-0d8c84c54113'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222400 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-13-00-01-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-13 00:59:56.222411 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:59:56.222421 | orchestrator | 2025-09-13 00:59:56.222430 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-13 00:59:56.222440 | orchestrator | Saturday 13 September 2025 00:58:03 +0000 (0:00:00.660) 0:00:16.301 **** 2025-09-13 00:59:56.222450 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:59:56.222460 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:59:56.222469 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:59:56.222479 | orchestrator | 2025-09-13 00:59:56.222488 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-13 00:59:56.222498 | orchestrator | Saturday 13 September 2025 00:58:03 +0000 (0:00:00.712) 0:00:17.014 **** 2025-09-13 00:59:56.222508 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:59:56.222517 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:59:56.222527 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:59:56.222536 | orchestrator | 2025-09-13 00:59:56.222546 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-13 00:59:56.222555 | orchestrator | Saturday 13 September 2025 00:58:04 +0000 (0:00:00.494) 0:00:17.508 **** 2025-09-13 00:59:56.222565 | orchestrator | ok: [testbed-node-3] 2025-09-13 00:59:56.222574 | orchestrator | ok: [testbed-node-4] 2025-09-13 00:59:56.222584 | orchestrator | ok: [testbed-node-5] 2025-09-13 00:59:56.222593 | orchestrator | 2025-09-13 00:59:56.222603 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-13 00:59:56.222612 | orchestrator | Saturday 13 September 2025 00:58:05 +0000 (0:00:00.653) 
0:00:18.162 **** 2025-09-13 00:59:56.222622 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:59:56.222632 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:59:56.222641 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:59:56.222651 | orchestrator | 2025-09-13 00:59:56.222660 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-13 00:59:56.222670 | orchestrator | Saturday 13 September 2025 00:58:05 +0000 (0:00:00.276) 0:00:18.438 **** 2025-09-13 00:59:56.222679 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:59:56.222689 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:59:56.222699 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:59:56.222708 | orchestrator | 2025-09-13 00:59:56.222718 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-13 00:59:56.222727 | orchestrator | Saturday 13 September 2025 00:58:05 +0000 (0:00:00.429) 0:00:18.868 **** 2025-09-13 00:59:56.222737 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:59:56.222747 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:59:56.222771 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:59:56.222782 | orchestrator | 2025-09-13 00:59:56.222791 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-13 00:59:56.222801 | orchestrator | Saturday 13 September 2025 00:58:06 +0000 (0:00:00.501) 0:00:19.369 **** 2025-09-13 00:59:56.222811 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-13 00:59:56.222826 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-13 00:59:56.222836 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-13 00:59:56.222845 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-13 00:59:56.222855 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-13 00:59:56.222864 | 
orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-13 00:59:56.222874 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-13 00:59:56.222883 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-13 00:59:56.222893 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-13 00:59:56.222902 | orchestrator | 2025-09-13 00:59:56.222912 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-13 00:59:56.222922 | orchestrator | Saturday 13 September 2025 00:58:07 +0000 (0:00:00.981) 0:00:20.351 **** 2025-09-13 00:59:56.222931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-13 00:59:56.222941 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-13 00:59:56.222950 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-13 00:59:56.222960 | orchestrator | skipping: [testbed-node-3] 2025-09-13 00:59:56.222969 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-13 00:59:56.222979 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-13 00:59:56.222989 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-13 00:59:56.222998 | orchestrator | skipping: [testbed-node-4] 2025-09-13 00:59:56.223008 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-13 00:59:56.223017 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-13 00:59:56.223027 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-13 00:59:56.223036 | orchestrator | skipping: [testbed-node-5] 2025-09-13 00:59:56.223046 | orchestrator | 2025-09-13 00:59:56.223055 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-13 00:59:56.223065 | orchestrator | Saturday 13 September 2025 00:58:07 +0000 (0:00:00.362) 0:00:20.713 **** 2025-09-13 
00:59:56.223075 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 00:59:56.223084 | orchestrator |
2025-09-13 00:59:56.223094 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-13 00:59:56.223104 | orchestrator | Saturday 13 September 2025 00:58:08 +0000 (0:00:00.670) 0:00:21.384 ****
2025-09-13 00:59:56.223119 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.223129 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:59:56.223138 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:59:56.223148 | orchestrator |
2025-09-13 00:59:56.223157 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-13 00:59:56.223167 | orchestrator | Saturday 13 September 2025 00:58:08 +0000 (0:00:00.301) 0:00:21.686 ****
2025-09-13 00:59:56.223177 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.223186 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:59:56.223196 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:59:56.223206 | orchestrator |
2025-09-13 00:59:56.223215 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-13 00:59:56.223225 | orchestrator | Saturday 13 September 2025 00:58:08 +0000 (0:00:00.310) 0:00:21.996 ****
2025-09-13 00:59:56.223234 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.223244 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:59:56.223253 | orchestrator | skipping: [testbed-node-5]
2025-09-13 00:59:56.223263 | orchestrator |
2025-09-13 00:59:56.223272 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-13 00:59:56.223282 | orchestrator | Saturday 13 September 2025 00:58:09 +0000 (0:00:00.309) 0:00:22.306 ****
2025-09-13 00:59:56.223292 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:59:56.223306 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:59:56.223316 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:59:56.223325 | orchestrator |
2025-09-13 00:59:56.223335 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-13 00:59:56.223345 | orchestrator | Saturday 13 September 2025 00:58:09 +0000 (0:00:00.596) 0:00:22.902 ****
2025-09-13 00:59:56.223354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-13 00:59:56.223364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-13 00:59:56.223373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-13 00:59:56.223383 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.223392 | orchestrator |
2025-09-13 00:59:56.223437 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-13 00:59:56.223448 | orchestrator | Saturday 13 September 2025 00:58:10 +0000 (0:00:00.367) 0:00:23.270 ****
2025-09-13 00:59:56.223458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-13 00:59:56.223467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-13 00:59:56.223477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-13 00:59:56.223486 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.223496 | orchestrator |
2025-09-13 00:59:56.223505 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-13 00:59:56.223515 | orchestrator | Saturday 13 September 2025 00:58:10 +0000 (0:00:00.368) 0:00:23.638 ****
2025-09-13 00:59:56.223524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-13 00:59:56.223538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-13 00:59:56.223547 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-13 00:59:56.223557 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.223566 | orchestrator |
2025-09-13 00:59:56.223576 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-13 00:59:56.223585 | orchestrator | Saturday 13 September 2025 00:58:10 +0000 (0:00:00.353) 0:00:23.992 ****
2025-09-13 00:59:56.223595 | orchestrator | ok: [testbed-node-3]
2025-09-13 00:59:56.223605 | orchestrator | ok: [testbed-node-4]
2025-09-13 00:59:56.223614 | orchestrator | ok: [testbed-node-5]
2025-09-13 00:59:56.223624 | orchestrator |
2025-09-13 00:59:56.223633 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-13 00:59:56.223643 | orchestrator | Saturday 13 September 2025 00:58:11 +0000 (0:00:00.319) 0:00:24.312 ****
2025-09-13 00:59:56.223652 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-13 00:59:56.223661 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-13 00:59:56.223671 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-13 00:59:56.223681 | orchestrator |
2025-09-13 00:59:56.223690 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-13 00:59:56.223700 | orchestrator | Saturday 13 September 2025 00:58:11 +0000 (0:00:00.491) 0:00:24.803 ****
2025-09-13 00:59:56.223709 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-13 00:59:56.223719 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-13 00:59:56.223728 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-13 00:59:56.223738 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-13 00:59:56.223747 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-13 00:59:56.223804 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-13 00:59:56.223815 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-13 00:59:56.223825 | orchestrator |
2025-09-13 00:59:56.223835 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-13 00:59:56.223844 | orchestrator | Saturday 13 September 2025 00:58:12 +0000 (0:00:00.988) 0:00:25.791 ****
2025-09-13 00:59:56.223862 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-13 00:59:56.223871 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-13 00:59:56.223881 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-13 00:59:56.223890 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-13 00:59:56.223900 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-13 00:59:56.223909 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-13 00:59:56.223925 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-13 00:59:56.223935 | orchestrator |
2025-09-13 00:59:56.223945 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-09-13 00:59:56.223954 | orchestrator | Saturday 13 September 2025 00:58:14 +0000 (0:00:01.929) 0:00:27.721 ****
2025-09-13 00:59:56.223964 | orchestrator | skipping: [testbed-node-3]
2025-09-13 00:59:56.223974 | orchestrator | skipping: [testbed-node-4]
2025-09-13 00:59:56.223983 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-09-13 00:59:56.223993 | orchestrator |
2025-09-13 00:59:56.224002 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-09-13 00:59:56.224012 | orchestrator | Saturday 13 September 2025 00:58:15 +0000 (0:00:00.378) 0:00:28.099 ****
2025-09-13 00:59:56.224023 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-13 00:59:56.224034 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-13 00:59:56.224044 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-13 00:59:56.224054 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-13 00:59:56.224068 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-13 00:59:56.224077 | orchestrator |
2025-09-13 00:59:56.224085 | orchestrator | TASK [generate keys] ***********************************************************
2025-09-13 00:59:56.224093 | orchestrator | Saturday 13 September 2025 00:59:00 +0000 (0:00:45.555) 0:01:13.655 ****
2025-09-13 00:59:56.224100 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224108 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224116 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224124 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224131 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224139 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224152 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-09-13 00:59:56.224160 | orchestrator |
2025-09-13 00:59:56.224168 | orchestrator | TASK [get keys from monitors] **************************************************
2025-09-13 00:59:56.224175 | orchestrator | Saturday 13 September 2025 00:59:24 +0000 (0:00:23.977) 0:01:37.633 ****
2025-09-13 00:59:56.224183 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224191 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224199 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224207 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224214 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224222 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224230 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-13 00:59:56.224238 | orchestrator |
2025-09-13 00:59:56.224246 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-09-13 00:59:56.224253 | orchestrator | Saturday 13 September 2025 00:59:36 +0000 (0:00:12.323) 0:01:49.956 ****
2025-09-13 00:59:56.224261 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224269 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-13 00:59:56.224277 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-13 00:59:56.224285 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224297 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-13 00:59:56.224306 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-13 00:59:56.224314 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224321 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-13 00:59:56.224329 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-13 00:59:56.224337 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224345 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-13 00:59:56.224353 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-13 00:59:56.224361 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224368 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-13 00:59:56.224376 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-13 00:59:56.224384 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-13 00:59:56.224391 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-13 00:59:56.224399 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-13 00:59:56.224407 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-09-13 00:59:56.224415 | orchestrator |
2025-09-13 00:59:56.224423 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 00:59:56.224431 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-09-13 00:59:56.224440 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-13 00:59:56.224448 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-13 00:59:56.224462 | orchestrator |
2025-09-13 00:59:56.224470 | orchestrator |
2025-09-13 00:59:56.224478 | orchestrator |
2025-09-13 00:59:56.224486 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 00:59:56.224493 | orchestrator | Saturday 13 September 2025 00:59:54 +0000 (0:00:17.146) 0:02:07.102 ****
2025-09-13 00:59:56.224501 | orchestrator | ===============================================================================
2025-09-13 00:59:56.224514 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.56s
2025-09-13 00:59:56.224522 | orchestrator | generate keys ---------------------------------------------------------- 23.98s
2025-09-13 00:59:56.224530 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.15s
2025-09-13 00:59:56.224537 | orchestrator | get keys from monitors ------------------------------------------------- 12.32s
2025-09-13 00:59:56.224545 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.01s
2025-09-13 00:59:56.224553 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.93s
2025-09-13 00:59:56.224560 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.70s
2025-09-13 00:59:56.224568 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.99s
2025-09-13 00:59:56.224576 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.98s
2025-09-13 00:59:56.224584 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.78s
2025-09-13 00:59:56.224592 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.71s
2025-09-13 00:59:56.224600 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.67s
2025-09-13 00:59:56.224607 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.67s
2025-09-13 00:59:56.224615 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.66s
2025-09-13 00:59:56.224623 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.65s
2025-09-13 00:59:56.224631 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.61s
2025-09-13 00:59:56.224638 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.60s
2025-09-13 00:59:56.224646 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.60s
2025-09-13 00:59:56.224654 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.60s
2025-09-13 00:59:56.224662 | orchestrator | ceph-facts : Set_fact build devices from resolved symlinks -------------- 0.55s
2025-09-13 00:59:56.224669 | orchestrator | 2025-09-13 00:59:56 | INFO  | Task 78c68b84-6553-4856-a653-cdb803c2f8bf is in state STARTED
2025-09-13 00:59:56.224677 | orchestrator | 2025-09-13 00:59:56 | INFO  | Wait 1 second(s) until the next check
2025-09-13 00:59:59.264714 | orchestrator | 2025-09-13 00:59:59 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 00:59:59.265094 | orchestrator | 2025-09-13 00:59:59 | INFO  | Task 83b70597-e4e6-441d-8346-75bc75e18ff7 is in state STARTED
2025-09-13 00:59:59.265718 | orchestrator | 2025-09-13 00:59:59 | INFO  | Task 78c68b84-6553-4856-a653-cdb803c2f8bf is in state STARTED
2025-09-13 00:59:59.265888 | orchestrator | 2025-09-13 00:59:59 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:00:02.293655 | orchestrator | 2025-09-13 01:00:02 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 01:00:02.294226 | orchestrator | 2025-09-13 01:00:02 | INFO  | Task 83b70597-e4e6-441d-8346-75bc75e18ff7 is in state STARTED
2025-09-13 01:00:02.295082 | orchestrator | 2025-09-13 01:00:02 | INFO  | Task 78c68b84-6553-4856-a653-cdb803c2f8bf is in state STARTED
2025-09-13 01:00:02.295111 | orchestrator | 2025-09-13 01:00:02 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:00:05.351088 | orchestrator | 2025-09-13 01:00:05 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 01:00:05.353029 | orchestrator | 2025-09-13 01:00:05 | INFO  | Task 83b70597-e4e6-441d-8346-75bc75e18ff7 is in state STARTED
2025-09-13 01:00:05.356457 | orchestrator | 2025-09-13 01:00:05 | INFO  | Task 78c68b84-6553-4856-a653-cdb803c2f8bf is in state STARTED
2025-09-13 01:00:05.356480 | orchestrator | 2025-09-13 01:00:05 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:00:08.405935 | orchestrator | 2025-09-13 01:00:08 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 01:00:08.407805 | orchestrator | 2025-09-13 01:00:08 | INFO  | Task 83b70597-e4e6-441d-8346-75bc75e18ff7 is in state STARTED
2025-09-13 01:00:08.410315 | orchestrator | 2025-09-13 01:00:08 | INFO  | Task 78c68b84-6553-4856-a653-cdb803c2f8bf is in state STARTED
2025-09-13 01:00:08.411005 | orchestrator | 2025-09-13 01:00:08 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:00:11.469055 | orchestrator | 2025-09-13 01:00:11 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 01:00:11.471043 | orchestrator | 2025-09-13 01:00:11 | INFO  | Task 83b70597-e4e6-441d-8346-75bc75e18ff7 is in state STARTED
2025-09-13 01:00:11.473452 | orchestrator | 2025-09-13 01:00:11 | INFO  | Task 78c68b84-6553-4856-a653-cdb803c2f8bf is in state STARTED
2025-09-13 01:00:11.473682 | orchestrator | 2025-09-13 01:00:11 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:00:14.538925 | orchestrator | 2025-09-13 01:00:14 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 01:00:14.540610 | orchestrator | 2025-09-13 01:00:14 | INFO  | Task 83b70597-e4e6-441d-8346-75bc75e18ff7 is in state STARTED
2025-09-13 01:00:14.542848 | orchestrator | 2025-09-13 01:00:14 | INFO  | Task 78c68b84-6553-4856-a653-cdb803c2f8bf is in state STARTED
2025-09-13 01:00:14.543107 | orchestrator | 2025-09-13 01:00:14 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:00:17.596572 | orchestrator | 2025-09-13 01:00:17 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 01:00:17.598115 | orchestrator | 2025-09-13 01:00:17 | INFO  | Task 83b70597-e4e6-441d-8346-75bc75e18ff7 is in state STARTED
2025-09-13 01:00:17.600460 | orchestrator | 2025-09-13 01:00:17 | INFO  | Task 78c68b84-6553-4856-a653-cdb803c2f8bf is in state STARTED
2025-09-13 01:00:17.600484 | orchestrator | 2025-09-13 01:00:17 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:00:20.643400 | orchestrator | 2025-09-13 01:00:20 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 01:00:20.644955 | orchestrator | 2025-09-13 01:00:20 | INFO  | Task 83b70597-e4e6-441d-8346-75bc75e18ff7 is in state STARTED
2025-09-13 01:00:20.647185 | orchestrator | 2025-09-13 01:00:20 | INFO  | Task 78c68b84-6553-4856-a653-cdb803c2f8bf is in state STARTED
2025-09-13 01:00:20.647211 | orchestrator | 2025-09-13 01:00:20 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:00:23.693720 | orchestrator | 2025-09-13 01:00:23 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 01:00:23.693875 | orchestrator | 2025-09-13 01:00:23 | INFO  | Task 83b70597-e4e6-441d-8346-75bc75e18ff7 is in state SUCCESS
2025-09-13 01:00:23.695056 | orchestrator | 2025-09-13 01:00:23 | INFO  | Task 78c68b84-6553-4856-a653-cdb803c2f8bf is in state STARTED
2025-09-13 01:00:23.695084 | orchestrator | 2025-09-13 01:00:23 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:00:26.741085 | orchestrator | 2025-09-13 01:00:26 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state STARTED
2025-09-13 01:00:26.742430 | orchestrator | 2025-09-13 01:00:26 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 01:00:26.746539 | orchestrator | 2025-09-13 01:00:26 | INFO  | Task 78c68b84-6553-4856-a653-cdb803c2f8bf is in state STARTED
2025-09-13 01:00:26.746565 | orchestrator | 2025-09-13 01:00:26 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:00:29.799251 | orchestrator | 2025-09-13 01:00:29 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state STARTED
2025-09-13 01:00:29.802118 | orchestrator | 2025-09-13 01:00:29 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 01:00:29.803112 | orchestrator | 2025-09-13 01:00:29 | INFO  | Task 78c68b84-6553-4856-a653-cdb803c2f8bf is in state STARTED
2025-09-13 01:00:29.803141 | orchestrator | 2025-09-13 01:00:29 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:00:32.856259 | orchestrator | 2025-09-13 01:00:32 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state STARTED
2025-09-13 01:00:32.859016 | orchestrator | 2025-09-13 01:00:32 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 01:00:32.860690 | orchestrator | 2025-09-13 01:00:32 | INFO  | Task 78c68b84-6553-4856-a653-cdb803c2f8bf is in state STARTED
2025-09-13 01:00:32.860870 | orchestrator | 2025-09-13 01:00:32 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:00:35.907662 | orchestrator | 2025-09-13 01:00:35 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state STARTED
2025-09-13 01:00:35.910310 | orchestrator | 2025-09-13 01:00:35 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 01:00:35.912597 | orchestrator | 2025-09-13 01:00:35 | INFO  | Task 78c68b84-6553-4856-a653-cdb803c2f8bf is in state STARTED
2025-09-13 01:00:35.912804 | orchestrator | 2025-09-13 01:00:35 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:00:38.948034 | orchestrator | 2025-09-13 01:00:38 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state STARTED
2025-09-13 01:00:38.948253 | orchestrator | 2025-09-13 01:00:38 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 01:00:38.949541 | orchestrator | 2025-09-13 01:00:38 | INFO  | Task 78c68b84-6553-4856-a653-cdb803c2f8bf is in state STARTED
2025-09-13 01:00:38.949566 | orchestrator | 2025-09-13 01:00:38 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:00:41.992074 | orchestrator | 2025-09-13 01:00:41 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state STARTED
2025-09-13 01:00:41.993807 | orchestrator | 2025-09-13 01:00:41 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 01:00:41.998153 | orchestrator | 2025-09-13 01:00:41 | INFO  | Task 78c68b84-6553-4856-a653-cdb803c2f8bf is in state SUCCESS
2025-09-13 01:00:42.000345 | orchestrator |
2025-09-13 01:00:42.000377 | orchestrator |
2025-09-13 01:00:42.000389 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-09-13 01:00:42.000402 | orchestrator |
2025-09-13 01:00:42.000413 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-09-13 01:00:42.000424 | orchestrator | Saturday 13 September 2025 00:59:58 +0000 (0:00:00.145) 0:00:00.146 ****
2025-09-13 01:00:42.000435 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-09-13 01:00:42.000447 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-13 01:00:42.000458 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-13 01:00:42.000495 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-09-13 01:00:42.000507 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-13 01:00:42.000518 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-09-13 01:00:42.000529 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-09-13 01:00:42.000540 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-09-13 01:00:42.000551 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-09-13 01:00:42.000562 | orchestrator |
2025-09-13 01:00:42.000572 | orchestrator | TASK [Create share directory] **************************************************
2025-09-13 01:00:42.000583 | orchestrator | Saturday 13 September 2025 01:00:02 +0000 (0:00:04.252) 0:00:04.398 ****
2025-09-13 01:00:42.000883 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-13 01:00:42.000901 | orchestrator |
2025-09-13 01:00:42.000912 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-09-13 01:00:42.000924 | orchestrator | Saturday 13 September 2025 01:00:03 +0000 (0:00:00.897) 0:00:05.296 ****
2025-09-13 01:00:42.000935 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-09-13 01:00:42.000946 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-13 01:00:42.000957 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-13 01:00:42.000967 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-09-13 01:00:42.000978 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-13 01:00:42.000989 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-09-13 01:00:42.001000 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-09-13 01:00:42.001011 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-09-13 01:00:42.001022 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-09-13 01:00:42.001033 | orchestrator |
2025-09-13 01:00:42.001044 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-09-13 01:00:42.001055 | orchestrator | Saturday 13 September 2025 01:00:16 +0000 (0:00:13.287) 0:00:18.584 ****
2025-09-13 01:00:42.001066 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-09-13 01:00:42.001077 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-13 01:00:42.001088 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-13 01:00:42.001098 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-09-13 01:00:42.001109 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-13 01:00:42.001120 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-09-13 01:00:42.001131 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-09-13 01:00:42.001142 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-09-13 01:00:42.001152 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-09-13 01:00:42.001163 | orchestrator |
2025-09-13 01:00:42.001174 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 01:00:42.001185 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 01:00:42.001198 | orchestrator |
2025-09-13 01:00:42.001209 | orchestrator |
2025-09-13 01:00:42.001220 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 01:00:42.001242 | orchestrator | Saturday 13 September 2025 01:00:22 +0000 (0:00:06.034) 0:00:24.619 ****
2025-09-13 01:00:42.001253 | orchestrator | ===============================================================================
2025-09-13 01:00:42.001263 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.29s
2025-09-13 01:00:42.001283 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.03s
2025-09-13 01:00:42.001294 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.25s
2025-09-13 01:00:42.001305 | orchestrator | Create share directory -------------------------------------------------- 0.90s
2025-09-13 01:00:42.001316 | orchestrator |
2025-09-13 01:00:42.001327 | orchestrator |
2025-09-13 01:00:42.001338 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 01:00:42.001349 | orchestrator |
2025-09-13 01:00:42.001370 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-13 01:00:42.001382 | orchestrator | Saturday 13 September 2025 00:58:58 +0000 (0:00:00.258) 0:00:00.258 ****
2025-09-13 01:00:42.001393 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:00:42.001404 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:00:42.001415 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:00:42.001425 | orchestrator |
2025-09-13 01:00:42.001436 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 01:00:42.001447 | orchestrator | Saturday 13 September 2025 00:58:58 +0000 (0:00:00.289) 0:00:00.548 ****
2025-09-13 01:00:42.001458 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-09-13 01:00:42.001469 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-09-13 01:00:42.001480 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-09-13 01:00:42.001491 | orchestrator |
2025-09-13 01:00:42.001504 | orchestrator | PLAY [Apply role horizon] ******************************************************
2025-09-13 01:00:42.001517 | orchestrator |
2025-09-13 01:00:42.001530 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-13 01:00:42.001543 | orchestrator | Saturday 13 September 2025 00:58:59 +0000 (0:00:00.393) 0:00:00.941 ****
2025-09-13 01:00:42.001557 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 01:00:42.001569 | orchestrator |
2025-09-13 01:00:42.001582 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2025-09-13 01:00:42.001595 | orchestrator | Saturday 13 September 2025 00:58:59 +0000 (0:00:00.507) 0:00:01.448 ****
2025-09-13 01:00:42.001614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-13 01:00:42.001660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-13 01:00:42.001677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-13 01:00:42.001698 | orchestrator |
2025-09-13 01:00:42.001712 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2025-09-13 01:00:42.001725 | orchestrator | Saturday 13 September 2025 00:59:00 +0000 (0:00:01.143) 0:00:02.592 ****
2025-09-13 01:00:42.001737 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:00:42.001750 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:00:42.001762 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:00:42.001798 | orchestrator |
2025-09-13 01:00:42.001817 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-13 01:00:42.001830 | orchestrator | Saturday 13 September 2025 00:59:01 +0000 (0:00:00.418) 0:00:03.011 ****
2025-09-13 01:00:42.001844 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-09-13 01:00:42.001855 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2025-09-13 01:00:42.001872 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-09-13 01:00:42.001884 | orchestrator | skipping:
[testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-13 01:00:42.001894 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-13 01:00:42.001905 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-13 01:00:42.001916 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-13 01:00:42.001927 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-13 01:00:42.001938 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-13 01:00:42.001949 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-13 01:00:42.001960 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-13 01:00:42.001970 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-13 01:00:42.001981 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-13 01:00:42.001992 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-13 01:00:42.002003 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-13 01:00:42.002013 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-13 01:00:42.002075 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-13 01:00:42.002086 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-13 01:00:42.002097 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-13 01:00:42.002107 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-13 
01:00:42.002118 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-13 01:00:42.002129 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-13 01:00:42.002147 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-13 01:00:42.002157 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-13 01:00:42.002169 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-13 01:00:42.002182 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-13 01:00:42.002193 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-13 01:00:42.002204 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-13 01:00:42.002215 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-13 01:00:42.002226 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-13 01:00:42.002236 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-13 01:00:42.002247 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 
=> (item={'name': 'neutron', 'enabled': True}) 2025-09-13 01:00:42.002258 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-13 01:00:42.002269 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-13 01:00:42.002280 | orchestrator | 2025-09-13 01:00:42.002291 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-13 01:00:42.002302 | orchestrator | Saturday 13 September 2025 00:59:01 +0000 (0:00:00.741) 0:00:03.752 **** 2025-09-13 01:00:42.002313 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:00:42.002324 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:00:42.002340 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:00:42.002351 | orchestrator | 2025-09-13 01:00:42.002362 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-13 01:00:42.002373 | orchestrator | Saturday 13 September 2025 00:59:02 +0000 (0:00:00.303) 0:00:04.056 **** 2025-09-13 01:00:42.002384 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.002395 | orchestrator | 2025-09-13 01:00:42.002406 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-13 01:00:42.002423 | orchestrator | Saturday 13 September 2025 00:59:02 +0000 (0:00:00.139) 0:00:04.195 **** 2025-09-13 01:00:42.002434 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.002445 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:00:42.002456 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:00:42.002467 | orchestrator | 2025-09-13 01:00:42.002478 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-13 01:00:42.002489 | orchestrator | Saturday 13 
September 2025 00:59:02 +0000 (0:00:00.459) 0:00:04.655 **** 2025-09-13 01:00:42.002499 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:00:42.002510 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:00:42.002521 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:00:42.002532 | orchestrator | 2025-09-13 01:00:42.002543 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-13 01:00:42.002554 | orchestrator | Saturday 13 September 2025 00:59:03 +0000 (0:00:00.288) 0:00:04.944 **** 2025-09-13 01:00:42.002572 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.002583 | orchestrator | 2025-09-13 01:00:42.002594 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-13 01:00:42.002605 | orchestrator | Saturday 13 September 2025 00:59:03 +0000 (0:00:00.120) 0:00:05.065 **** 2025-09-13 01:00:42.002616 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.002627 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:00:42.002638 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:00:42.002649 | orchestrator | 2025-09-13 01:00:42.002660 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-13 01:00:42.002671 | orchestrator | Saturday 13 September 2025 00:59:03 +0000 (0:00:00.274) 0:00:05.340 **** 2025-09-13 01:00:42.002681 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:00:42.002692 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:00:42.002703 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:00:42.002714 | orchestrator | 2025-09-13 01:00:42.002725 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-13 01:00:42.002736 | orchestrator | Saturday 13 September 2025 00:59:03 +0000 (0:00:00.290) 0:00:05.630 **** 2025-09-13 01:00:42.002746 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.002757 | orchestrator | 
2025-09-13 01:00:42.002785 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-13 01:00:42.002796 | orchestrator | Saturday 13 September 2025 00:59:03 +0000 (0:00:00.129) 0:00:05.759 **** 2025-09-13 01:00:42.002807 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.002818 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:00:42.002829 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:00:42.002840 | orchestrator | 2025-09-13 01:00:42.002851 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-13 01:00:42.002861 | orchestrator | Saturday 13 September 2025 00:59:04 +0000 (0:00:00.473) 0:00:06.233 **** 2025-09-13 01:00:42.002872 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:00:42.002883 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:00:42.002894 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:00:42.002905 | orchestrator | 2025-09-13 01:00:42.002916 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-13 01:00:42.002927 | orchestrator | Saturday 13 September 2025 00:59:04 +0000 (0:00:00.337) 0:00:06.571 **** 2025-09-13 01:00:42.002938 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.002949 | orchestrator | 2025-09-13 01:00:42.002960 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-13 01:00:42.002971 | orchestrator | Saturday 13 September 2025 00:59:04 +0000 (0:00:00.119) 0:00:06.690 **** 2025-09-13 01:00:42.002982 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.002993 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:00:42.003003 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:00:42.003014 | orchestrator | 2025-09-13 01:00:42.003025 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-13 01:00:42.003036 | 
orchestrator | Saturday 13 September 2025 00:59:05 +0000 (0:00:00.300) 0:00:06.990 **** 2025-09-13 01:00:42.003047 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:00:42.003058 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:00:42.003069 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:00:42.003080 | orchestrator | 2025-09-13 01:00:42.003091 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-13 01:00:42.003102 | orchestrator | Saturday 13 September 2025 00:59:05 +0000 (0:00:00.361) 0:00:07.351 **** 2025-09-13 01:00:42.003113 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.003124 | orchestrator | 2025-09-13 01:00:42.003134 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-13 01:00:42.003145 | orchestrator | Saturday 13 September 2025 00:59:05 +0000 (0:00:00.397) 0:00:07.748 **** 2025-09-13 01:00:42.003156 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.003167 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:00:42.003184 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:00:42.003195 | orchestrator | 2025-09-13 01:00:42.003206 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-13 01:00:42.003217 | orchestrator | Saturday 13 September 2025 00:59:06 +0000 (0:00:00.300) 0:00:08.049 **** 2025-09-13 01:00:42.003228 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:00:42.003239 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:00:42.003249 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:00:42.003260 | orchestrator | 2025-09-13 01:00:42.003271 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-13 01:00:42.003282 | orchestrator | Saturday 13 September 2025 00:59:06 +0000 (0:00:00.294) 0:00:08.344 **** 2025-09-13 01:00:42.003293 | orchestrator | skipping: [testbed-node-0] 2025-09-13 
01:00:42.003304 | orchestrator | 2025-09-13 01:00:42.003314 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-13 01:00:42.003325 | orchestrator | Saturday 13 September 2025 00:59:06 +0000 (0:00:00.130) 0:00:08.474 **** 2025-09-13 01:00:42.003342 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.003353 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:00:42.003364 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:00:42.003375 | orchestrator | 2025-09-13 01:00:42.003385 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-13 01:00:42.003396 | orchestrator | Saturday 13 September 2025 00:59:06 +0000 (0:00:00.290) 0:00:08.764 **** 2025-09-13 01:00:42.003407 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:00:42.003418 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:00:42.003429 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:00:42.003440 | orchestrator | 2025-09-13 01:00:42.003456 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-13 01:00:42.003468 | orchestrator | Saturday 13 September 2025 00:59:07 +0000 (0:00:00.496) 0:00:09.261 **** 2025-09-13 01:00:42.003479 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.003490 | orchestrator | 2025-09-13 01:00:42.003501 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-13 01:00:42.003512 | orchestrator | Saturday 13 September 2025 00:59:07 +0000 (0:00:00.123) 0:00:09.385 **** 2025-09-13 01:00:42.003523 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.003534 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:00:42.003545 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:00:42.003556 | orchestrator | 2025-09-13 01:00:42.003567 | orchestrator | TASK [horizon : Update policy file name] *************************************** 
2025-09-13 01:00:42.003578 | orchestrator | Saturday 13 September 2025 00:59:07 +0000 (0:00:00.281) 0:00:09.666 **** 2025-09-13 01:00:42.003588 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:00:42.003599 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:00:42.003610 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:00:42.003621 | orchestrator | 2025-09-13 01:00:42.003632 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-13 01:00:42.003643 | orchestrator | Saturday 13 September 2025 00:59:08 +0000 (0:00:00.296) 0:00:09.963 **** 2025-09-13 01:00:42.003654 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.003665 | orchestrator | 2025-09-13 01:00:42.003676 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-13 01:00:42.003687 | orchestrator | Saturday 13 September 2025 00:59:08 +0000 (0:00:00.135) 0:00:10.099 **** 2025-09-13 01:00:42.003697 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.003708 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:00:42.003719 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:00:42.003730 | orchestrator | 2025-09-13 01:00:42.003741 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-13 01:00:42.003752 | orchestrator | Saturday 13 September 2025 00:59:08 +0000 (0:00:00.404) 0:00:10.503 **** 2025-09-13 01:00:42.003803 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:00:42.003817 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:00:42.003828 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:00:42.003846 | orchestrator | 2025-09-13 01:00:42.003857 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-13 01:00:42.003868 | orchestrator | Saturday 13 September 2025 00:59:09 +0000 (0:00:00.486) 0:00:10.990 **** 2025-09-13 01:00:42.003879 | orchestrator | skipping: 
[testbed-node-0] 2025-09-13 01:00:42.003891 | orchestrator | 2025-09-13 01:00:42.003902 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-13 01:00:42.003913 | orchestrator | Saturday 13 September 2025 00:59:09 +0000 (0:00:00.114) 0:00:11.104 **** 2025-09-13 01:00:42.003923 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.003934 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:00:42.003945 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:00:42.003956 | orchestrator | 2025-09-13 01:00:42.003967 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-13 01:00:42.003978 | orchestrator | Saturday 13 September 2025 00:59:09 +0000 (0:00:00.317) 0:00:11.422 **** 2025-09-13 01:00:42.003989 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:00:42.004001 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:00:42.004012 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:00:42.004022 | orchestrator | 2025-09-13 01:00:42.004033 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-13 01:00:42.004044 | orchestrator | Saturday 13 September 2025 00:59:09 +0000 (0:00:00.293) 0:00:11.716 **** 2025-09-13 01:00:42.004055 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.004066 | orchestrator | 2025-09-13 01:00:42.004078 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-13 01:00:42.004089 | orchestrator | Saturday 13 September 2025 00:59:10 +0000 (0:00:00.122) 0:00:11.838 **** 2025-09-13 01:00:42.004100 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.004111 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:00:42.004121 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:00:42.004132 | orchestrator | 2025-09-13 01:00:42.004143 | orchestrator | TASK [horizon : Copying over config.json files for 
services] ******************* 2025-09-13 01:00:42.004155 | orchestrator | Saturday 13 September 2025 00:59:10 +0000 (0:00:00.487) 0:00:12.326 **** 2025-09-13 01:00:42.004166 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:00:42.004176 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:00:42.004187 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:00:42.004198 | orchestrator | 2025-09-13 01:00:42.004209 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-13 01:00:42.004220 | orchestrator | Saturday 13 September 2025 00:59:12 +0000 (0:00:01.599) 0:00:13.925 **** 2025-09-13 01:00:42.004231 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-13 01:00:42.004242 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-13 01:00:42.004253 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-13 01:00:42.004264 | orchestrator | 2025-09-13 01:00:42.004275 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-13 01:00:42.004286 | orchestrator | Saturday 13 September 2025 00:59:13 +0000 (0:00:01.804) 0:00:15.730 **** 2025-09-13 01:00:42.004297 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-13 01:00:42.004319 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-13 01:00:42.004330 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-13 01:00:42.004341 | orchestrator | 2025-09-13 01:00:42.004352 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-13 01:00:42.004363 | orchestrator | Saturday 13 September 2025 00:59:15 +0000 
(0:00:01.898) 0:00:17.628 **** 2025-09-13 01:00:42.004380 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-13 01:00:42.004398 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-13 01:00:42.004410 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-13 01:00:42.004421 | orchestrator | 2025-09-13 01:00:42.004431 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-13 01:00:42.004442 | orchestrator | Saturday 13 September 2025 00:59:17 +0000 (0:00:01.711) 0:00:19.340 **** 2025-09-13 01:00:42.004453 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.004464 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:00:42.004475 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:00:42.004486 | orchestrator | 2025-09-13 01:00:42.004497 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-13 01:00:42.004508 | orchestrator | Saturday 13 September 2025 00:59:17 +0000 (0:00:00.272) 0:00:19.613 **** 2025-09-13 01:00:42.004519 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.004530 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:00:42.004540 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:00:42.004551 | orchestrator | 2025-09-13 01:00:42.004562 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-13 01:00:42.004573 | orchestrator | Saturday 13 September 2025 00:59:18 +0000 (0:00:00.284) 0:00:19.898 **** 2025-09-13 01:00:42.004584 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 01:00:42.004595 | orchestrator | 2025-09-13 01:00:42.004606 | orchestrator | TASK [service-cert-copy 
: horizon | Copying over extra CA certificates] ******** 2025-09-13 01:00:42.004617 | orchestrator | Saturday 13 September 2025 00:59:18 +0000 (0:00:00.517) 0:00:20.415 **** 2025-09-13 01:00:42.004629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-13 01:00:42.004657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-13 01:00:42.004677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-13 01:00:42.004696 | orchestrator | 2025-09-13 01:00:42.004707 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-13 01:00:42.004723 | orchestrator | Saturday 13 September 2025 00:59:20 +0000 (0:00:01.524) 0:00:21.939 **** 2025-09-13 01:00:42.004744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-13 01:00:42.004757 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.004791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-13 01:00:42.004816 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:00:42.004829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-13 01:00:42.004841 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:00:42.004852 | orchestrator | 2025-09-13 01:00:42.004863 | orchestrator | TASK [service-cert-copy : horizon | Copying over 
backend internal TLS key] ***** 2025-09-13 01:00:42.004874 | orchestrator | Saturday 13 September 2025 00:59:20 +0000 (0:00:00.576) 0:00:22.516 **** 2025-09-13 01:00:42.004899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-13 01:00:42.004917 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.004930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-13 01:00:42.004941 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:00:42.004966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-13 01:00:42.004985 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:00:42.004996 | orchestrator | 2025-09-13 01:00:42.005006 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-13 01:00:42.005018 | orchestrator | Saturday 13 September 2025 00:59:21 +0000 (0:00:00.790) 0:00:23.307 **** 2025-09-13 01:00:42.005029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-13 01:00:42.005060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-13 01:00:42.005074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-13 01:00:42.005093 | orchestrator | 2025-09-13 01:00:42.005105 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2025-09-13 01:00:42.005116 | orchestrator | Saturday 13 September 2025 00:59:22 +0000 (0:00:01.454) 0:00:24.761 **** 2025-09-13 01:00:42.005126 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:00:42.005137 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:00:42.005148 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:00:42.005158 | orchestrator | 2025-09-13 01:00:42.005169 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-13 01:00:42.005180 | orchestrator | Saturday 13 September 2025 00:59:23 +0000 (0:00:00.259) 0:00:25.020 **** 2025-09-13 01:00:42.005200 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 01:00:42.005211 | orchestrator | 2025-09-13 01:00:42.005222 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-09-13 01:00:42.005233 | orchestrator | Saturday 13 September 2025 00:59:23 +0000 (0:00:00.464) 0:00:25.485 **** 2025-09-13 01:00:42.005244 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:00:42.005254 | orchestrator | 2025-09-13 01:00:42.005270 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-09-13 01:00:42.005282 | orchestrator | Saturday 13 September 2025 00:59:25 +0000 (0:00:02.121) 0:00:27.606 **** 2025-09-13 01:00:42.005293 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:00:42.005304 | orchestrator | 2025-09-13 01:00:42.005314 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-09-13 01:00:42.005325 | orchestrator | Saturday 13 September 2025 00:59:28 +0000 (0:00:02.404) 0:00:30.010 **** 2025-09-13 01:00:42.005336 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:00:42.005347 | orchestrator | 2025-09-13 01:00:42.005357 | orchestrator | TASK [horizon : Flush 
handlers] ************************************************ 2025-09-13 01:00:42.005369 | orchestrator | Saturday 13 September 2025 00:59:43 +0000 (0:00:15.348) 0:00:45.359 **** 2025-09-13 01:00:42.005379 | orchestrator | 2025-09-13 01:00:42.005390 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-13 01:00:42.005401 | orchestrator | Saturday 13 September 2025 00:59:43 +0000 (0:00:00.067) 0:00:45.426 **** 2025-09-13 01:00:42.005412 | orchestrator | 2025-09-13 01:00:42.005423 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-13 01:00:42.005433 | orchestrator | Saturday 13 September 2025 00:59:43 +0000 (0:00:00.062) 0:00:45.488 **** 2025-09-13 01:00:42.005444 | orchestrator | 2025-09-13 01:00:42.005455 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-09-13 01:00:42.005466 | orchestrator | Saturday 13 September 2025 00:59:43 +0000 (0:00:00.082) 0:00:45.571 **** 2025-09-13 01:00:42.005476 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:00:42.005487 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:00:42.005498 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:00:42.005508 | orchestrator | 2025-09-13 01:00:42.005519 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 01:00:42.005530 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-09-13 01:00:42.005541 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-13 01:00:42.005552 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-13 01:00:42.005569 | orchestrator | 2025-09-13 01:00:42.005580 | orchestrator | 2025-09-13 01:00:42.005591 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-13 01:00:42.005602 | orchestrator | Saturday 13 September 2025 01:00:39 +0000 (0:00:55.277) 0:01:40.849 **** 2025-09-13 01:00:42.005613 | orchestrator | =============================================================================== 2025-09-13 01:00:42.005623 | orchestrator | horizon : Restart horizon container ------------------------------------ 55.28s 2025-09-13 01:00:42.005634 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.35s 2025-09-13 01:00:42.005645 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.40s 2025-09-13 01:00:42.005656 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.12s 2025-09-13 01:00:42.005667 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.90s 2025-09-13 01:00:42.005677 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.80s 2025-09-13 01:00:42.005688 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.71s 2025-09-13 01:00:42.005699 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.60s 2025-09-13 01:00:42.005710 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.52s 2025-09-13 01:00:42.005721 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.45s 2025-09-13 01:00:42.005731 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.14s 2025-09-13 01:00:42.005742 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.79s 2025-09-13 01:00:42.005753 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2025-09-13 01:00:42.005810 | orchestrator | service-cert-copy : horizon | 
Copying over backend internal TLS certificate --- 0.58s 2025-09-13 01:00:42.005824 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2025-09-13 01:00:42.005835 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s 2025-09-13 01:00:42.005846 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s 2025-09-13 01:00:42.005857 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.49s 2025-09-13 01:00:42.005867 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2025-09-13 01:00:42.005878 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.47s 2025-09-13 01:00:42.005889 | orchestrator | 2025-09-13 01:00:41 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:00:45.039040 | orchestrator | 2025-09-13 01:00:45 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state STARTED 2025-09-13 01:00:45.040333 | orchestrator | 2025-09-13 01:00:45 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED 2025-09-13 01:00:45.040365 | orchestrator | 2025-09-13 01:00:45 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:00:48.072487 | orchestrator | 2025-09-13 01:00:48 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state STARTED 2025-09-13 01:00:48.074069 | orchestrator | 2025-09-13 01:00:48 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED 2025-09-13 01:00:48.074099 | orchestrator | 2025-09-13 01:00:48 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:00:51.120525 | orchestrator | 2025-09-13 01:00:51 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state STARTED 2025-09-13 01:00:51.122241 | orchestrator | 2025-09-13 01:00:51 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED 2025-09-13 01:00:51.122269 | orchestrator | 
2025-09-13 01:00:51 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:00:54.161000 | orchestrator | 2025-09-13 01:00:54 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state STARTED 2025-09-13 01:00:54.162228 | orchestrator | 2025-09-13 01:00:54 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED 2025-09-13 01:00:54.162254 | orchestrator | 2025-09-13 01:00:54 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:00:57.211764 | orchestrator | 2025-09-13 01:00:57 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state STARTED 2025-09-13 01:00:57.213617 | orchestrator | 2025-09-13 01:00:57 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED 2025-09-13 01:00:57.213651 | orchestrator | 2025-09-13 01:00:57 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:01:00.256217 | orchestrator | 2025-09-13 01:01:00 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state STARTED 2025-09-13 01:01:00.259039 | orchestrator | 2025-09-13 01:01:00 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED 2025-09-13 01:01:00.259649 | orchestrator | 2025-09-13 01:01:00 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:01:03.296817 | orchestrator | 2025-09-13 01:01:03 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state STARTED 2025-09-13 01:01:03.298387 | orchestrator | 2025-09-13 01:01:03 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED 2025-09-13 01:01:03.298656 | orchestrator | 2025-09-13 01:01:03 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:01:06.338571 | orchestrator | 2025-09-13 01:01:06 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state STARTED 2025-09-13 01:01:06.339350 | orchestrator | 2025-09-13 01:01:06 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED 2025-09-13 01:01:06.339835 | orchestrator | 2025-09-13 01:01:06 | INFO  | Wait 1 second(s) until the next check 2025-09-13 
01:01:09.375715 | orchestrator | 2025-09-13 01:01:09 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state STARTED 2025-09-13 01:01:09.376930 | orchestrator | 2025-09-13 01:01:09 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED 2025-09-13 01:01:09.376964 | orchestrator | 2025-09-13 01:01:09 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:01:12.423465 | orchestrator | 2025-09-13 01:01:12 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state STARTED 2025-09-13 01:01:12.424663 | orchestrator | 2025-09-13 01:01:12 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED 2025-09-13 01:01:12.424852 | orchestrator | 2025-09-13 01:01:12 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:01:15.465693 | orchestrator | 2025-09-13 01:01:15 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state STARTED 2025-09-13 01:01:15.466654 | orchestrator | 2025-09-13 01:01:15 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED 2025-09-13 01:01:15.466822 | orchestrator | 2025-09-13 01:01:15 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:01:18.505344 | orchestrator | 2025-09-13 01:01:18 | INFO  | Task e97f891e-1213-4c06-b3dd-03479d46547d is in state SUCCESS 2025-09-13 01:01:18.505990 | orchestrator | 2025-09-13 01:01:18 | INFO  | Task a99a8ed4-3566-41e8-b8b3-6d2f74edf390 is in state STARTED 2025-09-13 01:01:18.507197 | orchestrator | 2025-09-13 01:01:18 | INFO  | Task a7c63c85-a003-4ea4-a6d3-1b83c98a2cdd is in state STARTED 2025-09-13 01:01:18.509116 | orchestrator | 2025-09-13 01:01:18 | INFO  | Task a3f88062-ef98-4fe1-b807-934799561ecc is in state STARTED 2025-09-13 01:01:18.509763 | orchestrator | 2025-09-13 01:01:18 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED 2025-09-13 01:01:18.509987 | orchestrator | 2025-09-13 01:01:18 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:01:21.548177 | orchestrator | 2025-09-13 01:01:21 | 
INFO  | Task a99a8ed4-3566-41e8-b8b3-6d2f74edf390 is in state STARTED
2025-09-13 01:01:21.550172 | orchestrator | 2025-09-13 01:01:21 | INFO  | Task a7c63c85-a003-4ea4-a6d3-1b83c98a2cdd is in state SUCCESS
2025-09-13 01:01:21.551467 | orchestrator | 2025-09-13 01:01:21 | INFO  | Task a3f88062-ef98-4fe1-b807-934799561ecc is in state STARTED
2025-09-13 01:01:21.553525 | orchestrator | 2025-09-13 01:01:21 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 01:01:21.554423 | orchestrator | 2025-09-13 01:01:21 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:01:24.663728 | orchestrator | 2025-09-13 01:01:24 | INFO  | Task e5f17f68-d3e2-4148-a75b-f9c95536ba92 is in state STARTED
2025-09-13 01:01:24.663882 | orchestrator | 2025-09-13 01:01:24 | INFO  | Task a99a8ed4-3566-41e8-b8b3-6d2f74edf390 is in state STARTED
2025-09-13 01:01:24.663898 | orchestrator | 2025-09-13 01:01:24 | INFO  | Task a3f88062-ef98-4fe1-b807-934799561ecc is in state STARTED
2025-09-13 01:01:24.663910 | orchestrator | 2025-09-13 01:01:24 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 01:01:24.663921 | orchestrator | 2025-09-13 01:01:24 | INFO  | Task 6b8416a9-dd48-4818-b4a0-75aef24437e8 is in state STARTED
2025-09-13 01:01:24.663932 | orchestrator | 2025-09-13 01:01:24 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:01:27.649620 | orchestrator | 2025-09-13 01:01:27 | INFO  | Task e5f17f68-d3e2-4148-a75b-f9c95536ba92 is in state STARTED
2025-09-13 01:01:27.649713 | orchestrator | 2025-09-13 01:01:27 | INFO  | Task a99a8ed4-3566-41e8-b8b3-6d2f74edf390 is in state STARTED
2025-09-13 01:01:27.650654 | orchestrator | 2025-09-13 01:01:27 | INFO  | Task a3f88062-ef98-4fe1-b807-934799561ecc is in state STARTED
2025-09-13 01:01:27.651593 | orchestrator | 2025-09-13 01:01:27 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state STARTED
2025-09-13 01:01:27.652400 | orchestrator | 2025-09-13 01:01:27 | INFO  | Task 6b8416a9-dd48-4818-b4a0-75aef24437e8 is in state STARTED
2025-09-13 01:01:27.653043 | orchestrator | 2025-09-13 01:01:27 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:01:30.705332 | orchestrator | 2025-09-13 01:01:30 | INFO  | Task e5f17f68-d3e2-4148-a75b-f9c95536ba92 is in state STARTED
2025-09-13 01:01:30.705425 | orchestrator | 2025-09-13 01:01:30 | INFO  | Task a99a8ed4-3566-41e8-b8b3-6d2f74edf390 is in state STARTED
2025-09-13 01:01:30.706634 | orchestrator | 2025-09-13 01:01:30 | INFO  | Task a3f88062-ef98-4fe1-b807-934799561ecc is in state STARTED
2025-09-13 01:01:30.712238 | orchestrator |
2025-09-13 01:01:30.712290 | orchestrator |
2025-09-13 01:01:30.712303 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-09-13 01:01:30.712315 | orchestrator |
2025-09-13 01:01:30.712326 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-09-13 01:01:30.712338 | orchestrator | Saturday 13 September 2025 01:00:26 +0000 (0:00:00.215) 0:00:00.215 ****
2025-09-13 01:01:30.712350 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-09-13 01:01:30.712363 | orchestrator |
2025-09-13 01:01:30.712374 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-09-13 01:01:30.712385 | orchestrator | Saturday 13 September 2025 01:00:26 +0000 (0:00:00.223) 0:00:00.438 ****
2025-09-13 01:01:30.712397 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-09-13 01:01:30.712408 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-09-13 01:01:30.712442 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-09-13 01:01:30.712454 | orchestrator |
2025-09-13 01:01:30.712465 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-09-13 01:01:30.712476 | orchestrator | Saturday 13 September 2025 01:00:27 +0000 (0:00:01.250) 0:00:01.689 ****
2025-09-13 01:01:30.712487 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-09-13 01:01:30.712498 | orchestrator |
2025-09-13 01:01:30.712509 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-09-13 01:01:30.712520 | orchestrator | Saturday 13 September 2025 01:00:29 +0000 (0:00:01.174) 0:00:02.863 ****
2025-09-13 01:01:30.712531 | orchestrator | changed: [testbed-manager]
2025-09-13 01:01:30.712542 | orchestrator |
2025-09-13 01:01:30.712553 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-09-13 01:01:30.712564 | orchestrator | Saturday 13 September 2025 01:00:30 +0000 (0:00:01.044) 0:00:03.908 ****
2025-09-13 01:01:30.712589 | orchestrator | changed: [testbed-manager]
2025-09-13 01:01:30.712601 | orchestrator |
2025-09-13 01:01:30.712611 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-09-13 01:01:30.712693 | orchestrator | Saturday 13 September 2025 01:00:30 +0000 (0:00:00.826) 0:00:04.734 ****
2025-09-13 01:01:30.712997 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
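The interleaved osism output above polls each task ID until it leaves the STARTED state, sleeping one second between rounds. A minimal sketch of that poll-until-terminal pattern follows; the `wait_for_tasks` function and the `fetch_state` callback are hypothetical illustrations, not part of the osism CLI:

```python
import time

# States after which a task is never polled again (mirrors the
# SUCCESS lines in the log; FAILURE is an assumed counterpart).
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, fetch_state, interval=1.0, log=print):
    """Poll every task until all of them reach a terminal state.

    fetch_state(task_id) -> str is a hypothetical callback returning
    the task's current state (e.g. "STARTED", "SUCCESS").
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, so discard() is safe
            state = fetch_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            log(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

Each still-pending task is reported every round, which is why the same UUIDs repeat in the log until their SUCCESS line appears.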
2025-09-13 01:01:30.713020 | orchestrator | ok: [testbed-manager]
2025-09-13 01:01:30.713032 | orchestrator |
2025-09-13 01:01:30.713046 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-09-13 01:01:30.713059 | orchestrator | Saturday 13 September 2025 01:01:06 +0000 (0:00:35.417) 0:00:40.152 ****
2025-09-13 01:01:30.713072 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-09-13 01:01:30.713085 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-09-13 01:01:30.713099 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-09-13 01:01:30.713111 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-09-13 01:01:30.713124 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-09-13 01:01:30.713136 | orchestrator |
2025-09-13 01:01:30.713149 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-09-13 01:01:30.713162 | orchestrator | Saturday 13 September 2025 01:01:10 +0000 (0:00:03.944) 0:00:44.096 ****
2025-09-13 01:01:30.713175 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-09-13 01:01:30.713187 | orchestrator |
2025-09-13 01:01:30.713198 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-09-13 01:01:30.713209 | orchestrator | Saturday 13 September 2025 01:01:10 +0000 (0:00:00.433) 0:00:44.530 ****
2025-09-13 01:01:30.713220 | orchestrator | skipping: [testbed-manager]
2025-09-13 01:01:30.713231 | orchestrator |
2025-09-13 01:01:30.713241 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-09-13 01:01:30.713252 | orchestrator | Saturday 13 September 2025 01:01:10 +0000 (0:00:00.112) 0:00:44.643 ****
2025-09-13 01:01:30.713263 | orchestrator | skipping: [testbed-manager]
2025-09-13 01:01:30.713274 | orchestrator |
2025-09-13 01:01:30.713285 | orchestrator | RUNNING HANDLER
[osism.services.cephclient : Restart cephclient service] *******
2025-09-13 01:01:30.713296 | orchestrator | Saturday 13 September 2025 01:01:11 +0000 (0:00:00.282) 0:00:44.925 ****
2025-09-13 01:01:30.713307 | orchestrator | changed: [testbed-manager]
2025-09-13 01:01:30.713318 | orchestrator |
2025-09-13 01:01:30.713329 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-09-13 01:01:30.713340 | orchestrator | Saturday 13 September 2025 01:01:12 +0000 (0:00:01.682) 0:00:46.608 ****
2025-09-13 01:01:30.713351 | orchestrator | changed: [testbed-manager]
2025-09-13 01:01:30.713362 | orchestrator |
2025-09-13 01:01:30.713373 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for a healthy service] ******
2025-09-13 01:01:30.713384 | orchestrator | Saturday 13 September 2025 01:01:13 +0000 (0:00:00.702) 0:00:47.310 ****
2025-09-13 01:01:30.713407 | orchestrator | changed: [testbed-manager]
2025-09-13 01:01:30.713418 | orchestrator |
2025-09-13 01:01:30.713429 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-09-13 01:01:30.713440 | orchestrator | Saturday 13 September 2025 01:01:14 +0000 (0:00:00.590) 0:00:47.901 ****
2025-09-13 01:01:30.713451 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-09-13 01:01:30.713461 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-09-13 01:01:30.713472 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-09-13 01:01:30.713484 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-09-13 01:01:30.713494 | orchestrator |
2025-09-13 01:01:30.713506 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 01:01:30.713568 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-13 01:01:30.713654 | orchestrator |
2025-09-13 01:01:30.713666 | orchestrator |
2025-09-13
01:01:30.713693 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 01:01:30.713705 | orchestrator | Saturday 13 September 2025 01:01:15 +0000 (0:00:01.339) 0:00:49.241 ****
2025-09-13 01:01:30.713716 | orchestrator | ===============================================================================
2025-09-13 01:01:30.713727 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 35.42s
2025-09-13 01:01:30.713739 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.94s
2025-09-13 01:01:30.713750 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.68s
2025-09-13 01:01:30.713760 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.34s
2025-09-13 01:01:30.713771 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.25s
2025-09-13 01:01:30.713828 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.17s
2025-09-13 01:01:30.713841 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.04s
2025-09-13 01:01:30.713852 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.83s
2025-09-13 01:01:30.713863 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.70s
2025-09-13 01:01:30.713874 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------ 0.59s
2025-09-13 01:01:30.713885 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.43s
2025-09-13 01:01:30.713895 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.28s
2025-09-13 01:01:30.713906 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s
2025-09-13 01:01:30.713917 |
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.11s
2025-09-13 01:01:30.713928 | orchestrator |
2025-09-13 01:01:30.713939 | orchestrator |
2025-09-13 01:01:30.713950 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 01:01:30.713960 | orchestrator |
2025-09-13 01:01:30.713971 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-13 01:01:30.713982 | orchestrator | Saturday 13 September 2025 01:01:19 +0000 (0:00:00.162) 0:00:00.162 ****
2025-09-13 01:01:30.713993 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:01:30.714012 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:01:30.714067 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:01:30.714079 | orchestrator |
2025-09-13 01:01:30.714090 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 01:01:30.714101 | orchestrator | Saturday 13 September 2025 01:01:19 +0000 (0:00:00.273) 0:00:00.436 ****
2025-09-13 01:01:30.714112 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-09-13 01:01:30.714123 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-09-13 01:01:30.714133 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-09-13 01:01:30.714144 | orchestrator |
2025-09-13 01:01:30.714155 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-09-13 01:01:30.714176 | orchestrator |
2025-09-13 01:01:30.714187 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-09-13 01:01:30.714197 | orchestrator | Saturday 13 September 2025 01:01:20 +0000 (0:00:00.648) 0:00:01.084 ****
2025-09-13 01:01:30.714208 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:01:30.714219 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:01:30.714230 | orchestrator | ok:
[testbed-node-0]
2025-09-13 01:01:30.714241 | orchestrator |
2025-09-13 01:01:30.714252 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 01:01:30.714264 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 01:01:30.714275 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 01:01:30.714286 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 01:01:30.714298 | orchestrator |
2025-09-13 01:01:30.714308 | orchestrator |
2025-09-13 01:01:30.714319 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 01:01:30.714330 | orchestrator | Saturday 13 September 2025 01:01:20 +0000 (0:00:00.768) 0:00:01.853 ****
2025-09-13 01:01:30.714341 | orchestrator | ===============================================================================
2025-09-13 01:01:30.714352 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.77s
2025-09-13 01:01:30.714363 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s
2025-09-13 01:01:30.714374 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2025-09-13 01:01:30.714385 | orchestrator |
2025-09-13 01:01:30.714396 | orchestrator | 2025-09-13 01:01:30 | INFO  | Task 879f6d87-e44e-4533-a267-40ef99b09c2b is in state SUCCESS
2025-09-13 01:01:30.714939 | orchestrator |
2025-09-13 01:01:30.714972 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 01:01:30.714983 | orchestrator |
2025-09-13 01:01:30.714994 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-13 01:01:30.715005 | orchestrator | Saturday 13 September 2025 00:58:58
+0000 (0:00:00.284) 0:00:00.284 **** 2025-09-13 01:01:30.715016 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:01:30.715027 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:01:30.715038 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:01:30.715049 | orchestrator | 2025-09-13 01:01:30.715060 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-13 01:01:30.715071 | orchestrator | Saturday 13 September 2025 00:58:58 +0000 (0:00:00.279) 0:00:00.563 **** 2025-09-13 01:01:30.715082 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-13 01:01:30.715093 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-13 01:01:30.715998 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-13 01:01:30.716025 | orchestrator | 2025-09-13 01:01:30.716036 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-13 01:01:30.716047 | orchestrator | 2025-09-13 01:01:30.716058 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-13 01:01:30.716069 | orchestrator | Saturday 13 September 2025 00:58:59 +0000 (0:00:00.430) 0:00:00.993 **** 2025-09-13 01:01:30.716080 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 01:01:30.716092 | orchestrator | 2025-09-13 01:01:30.716102 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-13 01:01:30.716113 | orchestrator | Saturday 13 September 2025 00:58:59 +0000 (0:00:00.550) 0:00:01.544 **** 2025-09-13 01:01:30.716131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-13 01:01:30.716172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-13 01:01:30.716229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-13 01:01:30.716245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-13 01:01:30.716258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-13 01:01:30.716277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-13 01:01:30.716295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-13 01:01:30.716306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-13 01:01:30.716318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-13 01:01:30.716329 | orchestrator | 2025-09-13 01:01:30.716341 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-13 01:01:30.716352 | orchestrator | Saturday 13 September 2025 00:59:01 +0000 (0:00:01.839) 0:00:03.383 **** 2025-09-13 01:01:30.716368 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-13 01:01:30.716380 | orchestrator | 2025-09-13 01:01:30.716391 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-13 01:01:30.716402 | orchestrator | Saturday 13 September 2025 00:59:02 +0000 (0:00:00.820) 0:00:04.204 **** 2025-09-13 01:01:30.716413 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:01:30.716424 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:01:30.716435 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:01:30.716446 | orchestrator | 2025-09-13 01:01:30.716457 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-09-13 01:01:30.716468 | orchestrator | Saturday 13 September 2025 00:59:03 +0000 (0:00:00.472) 0:00:04.676 **** 2025-09-13 01:01:30.716479 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2025-09-13 01:01:30.716490 | orchestrator | 2025-09-13 01:01:30.716502 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-13 01:01:30.716513 | orchestrator | Saturday 13 September 2025 00:59:03 +0000 (0:00:00.706) 0:00:05.382 **** 2025-09-13 01:01:30.716538 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 01:01:30.716550 | orchestrator | 2025-09-13 01:01:30.716561 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-13 01:01:30.716571 | orchestrator | Saturday 13 September 2025 00:59:04 +0000 (0:00:00.504) 0:00:05.887 **** 2025-09-13 01:01:30.716584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-13 01:01:30.716601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-13 01:01:30.716614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-13 01:01:30.716635 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-13 01:01:30.716654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-13 01:01:30.716666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-13 01:01:30.716682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.716694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.716706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.716717 | orchestrator |
2025-09-13 01:01:30.716728 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-09-13 01:01:30.716739 | orchestrator | Saturday 13 September 2025 00:59:07 +0000 (0:00:03.371) 0:00:09.259 ****
2025-09-13 01:01:30.716758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-13 01:01:30.716777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-13 01:01:30.716818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.716830 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:01:30.716847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-13 01:01:30.716860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-13 01:01:30.716881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-13 01:01:30.716900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-13 01:01:30.716911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.716923 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:01:30.716939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.716951 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:01:30.716962 | orchestrator |
2025-09-13 01:01:30.716973 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-09-13 01:01:30.716984 | orchestrator | Saturday 13 September 2025 00:59:08 +0000 (0:00:00.724) 0:00:09.983 ****
2025-09-13 01:01:30.716996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-13 01:01:30.717015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-13 01:01:30.717034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.717045 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:01:30.717057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-13 01:01:30.717074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-13 01:01:30.717086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.717097 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:01:30.717109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-13 01:01:30.717134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-13 01:01:30.717146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.717157 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:01:30.717168 | orchestrator |
2025-09-13 01:01:30.717180 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-09-13 01:01:30.717191 | orchestrator | Saturday 13 September 2025 00:59:09 +0000 (0:00:00.730) 0:00:10.714 ****
2025-09-13 01:01:30.717207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-13 01:01:30.717220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-13 01:01:30.717245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-13 01:01:30.717258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-13 01:01:30.717269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-13 01:01:30.717281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-13 01:01:30.717297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.717309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.717326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.717337 | orchestrator |
2025-09-13 01:01:30.717349 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-09-13 01:01:30.717360 | orchestrator | Saturday 13 September 2025 00:59:12 +0000 (0:00:03.048) 0:00:13.762 ****
2025-09-13 01:01:30.717379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-13 01:01:30.717391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-13 01:01:30.717412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-13 01:01:30.717425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-13 01:01:30.717450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-13 01:01:30.717462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-13 01:01:30.717473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.717485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.717501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.717513 | orchestrator |
2025-09-13 01:01:30.717524 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-09-13 01:01:30.717535 | orchestrator | Saturday 13 September 2025 00:59:16 +0000 (0:00:04.567) 0:00:18.329 ****
2025-09-13 01:01:30.717552 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:01:30.717563 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:01:30.717574 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:01:30.717585 | orchestrator |
2025-09-13 01:01:30.717596 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-09-13 01:01:30.717607 | orchestrator | Saturday 13 September 2025 00:59:17 +0000 (0:00:01.315) 0:00:19.645 ****
2025-09-13 01:01:30.717618 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:01:30.717629 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:01:30.717640 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:01:30.717651 | orchestrator |
2025-09-13 01:01:30.717662 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-09-13 01:01:30.717673 | orchestrator | Saturday 13 September 2025 00:59:18 +0000 (0:00:00.475) 0:00:20.120 ****
2025-09-13 01:01:30.717684 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:01:30.717695 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:01:30.717706 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:01:30.717716 | orchestrator |
2025-09-13 01:01:30.717728 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-09-13 01:01:30.717739 | orchestrator | Saturday 13 September 2025 00:59:18 +0000 (0:00:00.357) 0:00:20.393 ****
2025-09-13 01:01:30.717749 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:01:30.717760 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:01:30.717771 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:01:30.717805 | orchestrator |
2025-09-13 01:01:30.717818 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-09-13 01:01:30.717829 | orchestrator | Saturday 13 September 2025 00:59:19 +0000 (0:00:00.357) 0:00:20.750 ****
2025-09-13 01:01:30.717849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-13 01:01:30.717861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-13 01:01:30.717878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-13 01:01:30.717898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-13 01:01:30.717911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-13 01:01:30.717928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-13 01:01:30.717940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.717952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.717968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-13 01:01:30.717985 | orchestrator |
2025-09-13 01:01:30.717997 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-13 01:01:30.718008 | orchestrator | Saturday 13 September 2025 00:59:21 +0000 (0:00:02.167) 0:00:22.918 ****
2025-09-13 01:01:30.718050 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:01:30.718064 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:01:30.718075 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:01:30.718086 | orchestrator |
2025-09-13 01:01:30.718097 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2025-09-13 01:01:30.718109 | orchestrator | Saturday 13 September 2025 00:59:21 +0000 (0:00:00.278) 0:00:23.196 ****
2025-09-13 01:01:30.718119 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-09-13 01:01:30.718131 | orchestrator |
changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-13 01:01:30.718142 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-13 01:01:30.718153 | orchestrator | 2025-09-13 01:01:30.718164 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-13 01:01:30.718174 | orchestrator | Saturday 13 September 2025 00:59:23 +0000 (0:00:01.667) 0:00:24.864 **** 2025-09-13 01:01:30.718185 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-13 01:01:30.718197 | orchestrator | 2025-09-13 01:01:30.718208 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-13 01:01:30.718219 | orchestrator | Saturday 13 September 2025 00:59:24 +0000 (0:00:00.893) 0:00:25.757 **** 2025-09-13 01:01:30.718230 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:01:30.718241 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:01:30.718252 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:01:30.718263 | orchestrator | 2025-09-13 01:01:30.718274 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-13 01:01:30.718284 | orchestrator | Saturday 13 September 2025 00:59:24 +0000 (0:00:00.704) 0:00:26.462 **** 2025-09-13 01:01:30.718295 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-13 01:01:30.718306 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-13 01:01:30.718317 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-13 01:01:30.718328 | orchestrator | 2025-09-13 01:01:30.718339 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-13 01:01:30.718350 | orchestrator | Saturday 13 September 2025 00:59:25 +0000 (0:00:00.906) 0:00:27.368 **** 2025-09-13 01:01:30.718368 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:01:30.718379 
| orchestrator | ok: [testbed-node-1] 2025-09-13 01:01:30.718390 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:01:30.718401 | orchestrator | 2025-09-13 01:01:30.718412 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-13 01:01:30.718423 | orchestrator | Saturday 13 September 2025 00:59:25 +0000 (0:00:00.257) 0:00:27.626 **** 2025-09-13 01:01:30.718434 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-13 01:01:30.718445 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-13 01:01:30.718456 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-13 01:01:30.718467 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-13 01:01:30.718485 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-13 01:01:30.718496 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-13 01:01:30.718507 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-13 01:01:30.718518 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-13 01:01:30.718529 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-13 01:01:30.718540 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-13 01:01:30.718551 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-13 01:01:30.718562 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 
'fernet-push.sh'}) 2025-09-13 01:01:30.718572 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-13 01:01:30.718583 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-13 01:01:30.718594 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-13 01:01:30.718605 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-13 01:01:30.718616 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-13 01:01:30.718628 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-13 01:01:30.718639 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-13 01:01:30.718650 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-13 01:01:30.718666 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-13 01:01:30.718677 | orchestrator | 2025-09-13 01:01:30.718688 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-13 01:01:30.718699 | orchestrator | Saturday 13 September 2025 00:59:34 +0000 (0:00:08.545) 0:00:36.171 **** 2025-09-13 01:01:30.718710 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-13 01:01:30.718721 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-13 01:01:30.718732 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-13 01:01:30.718743 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 
2025-09-13 01:01:30.718753 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-13 01:01:30.718764 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-13 01:01:30.718775 | orchestrator | 2025-09-13 01:01:30.718811 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-13 01:01:30.718823 | orchestrator | Saturday 13 September 2025 00:59:37 +0000 (0:00:02.689) 0:00:38.860 **** 2025-09-13 01:01:30.718841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-13 01:01:30.718862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-13 01:01:30.718875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-13 01:01:30.718892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-13 01:01:30.718904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-13 01:01:30.718916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-13 01:01:30.718954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-13 01:01:30.718973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-13 01:01:30.718992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-13 01:01:30.719010 | orchestrator | 2025-09-13 01:01:30.719028 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-13 01:01:30.719047 | orchestrator | Saturday 13 September 2025 00:59:39 +0000 (0:00:02.170) 0:00:41.031 **** 2025-09-13 01:01:30.719065 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:01:30.719085 | 
orchestrator | skipping: [testbed-node-1] 2025-09-13 01:01:30.719103 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:01:30.719121 | orchestrator | 2025-09-13 01:01:30.719135 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-13 01:01:30.719146 | orchestrator | Saturday 13 September 2025 00:59:39 +0000 (0:00:00.295) 0:00:41.327 **** 2025-09-13 01:01:30.719157 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:01:30.719168 | orchestrator | 2025-09-13 01:01:30.719185 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-13 01:01:30.719196 | orchestrator | Saturday 13 September 2025 00:59:41 +0000 (0:00:02.204) 0:00:43.532 **** 2025-09-13 01:01:30.719207 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:01:30.719218 | orchestrator | 2025-09-13 01:01:30.719229 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-13 01:01:30.719240 | orchestrator | Saturday 13 September 2025 00:59:43 +0000 (0:00:02.134) 0:00:45.666 **** 2025-09-13 01:01:30.719250 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:01:30.719261 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:01:30.719272 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:01:30.719282 | orchestrator | 2025-09-13 01:01:30.719293 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-13 01:01:30.719304 | orchestrator | Saturday 13 September 2025 00:59:44 +0000 (0:00:00.832) 0:00:46.499 **** 2025-09-13 01:01:30.719315 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:01:30.719335 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:01:30.719346 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:01:30.719356 | orchestrator | 2025-09-13 01:01:30.719367 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-13 01:01:30.719378 | 
orchestrator | Saturday 13 September 2025 00:59:45 +0000 (0:00:00.609) 0:00:47.108 **** 2025-09-13 01:01:30.719389 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:01:30.719400 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:01:30.719411 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:01:30.719421 | orchestrator | 2025-09-13 01:01:30.719432 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-13 01:01:30.719443 | orchestrator | Saturday 13 September 2025 00:59:45 +0000 (0:00:00.345) 0:00:47.453 **** 2025-09-13 01:01:30.719454 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:01:30.719465 | orchestrator | 2025-09-13 01:01:30.719475 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-13 01:01:30.719486 | orchestrator | Saturday 13 September 2025 00:59:59 +0000 (0:00:13.639) 0:01:01.093 **** 2025-09-13 01:01:30.719497 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:01:30.719508 | orchestrator | 2025-09-13 01:01:30.719519 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-13 01:01:30.719530 | orchestrator | Saturday 13 September 2025 01:00:09 +0000 (0:00:10.029) 0:01:11.123 **** 2025-09-13 01:01:30.719540 | orchestrator | 2025-09-13 01:01:30.719551 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-13 01:01:30.719562 | orchestrator | Saturday 13 September 2025 01:00:09 +0000 (0:00:00.077) 0:01:11.200 **** 2025-09-13 01:01:30.719573 | orchestrator | 2025-09-13 01:01:30.719584 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-13 01:01:30.719594 | orchestrator | Saturday 13 September 2025 01:00:09 +0000 (0:00:00.070) 0:01:11.270 **** 2025-09-13 01:01:30.719605 | orchestrator | 2025-09-13 01:01:30.719623 | orchestrator | RUNNING HANDLER [keystone : Restart 
keystone-ssh container] ******************** 2025-09-13 01:01:30.719635 | orchestrator | Saturday 13 September 2025 01:00:09 +0000 (0:00:00.075) 0:01:11.346 **** 2025-09-13 01:01:30.719653 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:01:30.719671 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:01:30.719689 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:01:30.719707 | orchestrator | 2025-09-13 01:01:30.719726 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-13 01:01:30.719746 | orchestrator | Saturday 13 September 2025 01:00:29 +0000 (0:00:20.288) 0:01:31.634 **** 2025-09-13 01:01:30.719765 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:01:30.719777 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:01:30.719856 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:01:30.719868 | orchestrator | 2025-09-13 01:01:30.719879 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-13 01:01:30.719890 | orchestrator | Saturday 13 September 2025 01:00:35 +0000 (0:00:05.144) 0:01:36.779 **** 2025-09-13 01:01:30.719901 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:01:30.719912 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:01:30.719922 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:01:30.719933 | orchestrator | 2025-09-13 01:01:30.719944 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-13 01:01:30.719955 | orchestrator | Saturday 13 September 2025 01:00:42 +0000 (0:00:07.389) 0:01:44.169 **** 2025-09-13 01:01:30.719966 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 01:01:30.719977 | orchestrator | 2025-09-13 01:01:30.719987 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-13 
01:01:30.719998 | orchestrator | Saturday 13 September 2025 01:00:43 +0000 (0:00:00.640) 0:01:44.810 **** 2025-09-13 01:01:30.720009 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:01:30.720020 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:01:30.720045 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:01:30.720055 | orchestrator | 2025-09-13 01:01:30.720066 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-13 01:01:30.720077 | orchestrator | Saturday 13 September 2025 01:00:43 +0000 (0:00:00.689) 0:01:45.499 **** 2025-09-13 01:01:30.720094 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:01:30.720112 | orchestrator | 2025-09-13 01:01:30.720131 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-13 01:01:30.720148 | orchestrator | Saturday 13 September 2025 01:00:45 +0000 (0:00:01.744) 0:01:47.244 **** 2025-09-13 01:01:30.720167 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-13 01:01:30.720185 | orchestrator | 2025-09-13 01:01:30.720204 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-13 01:01:30.720225 | orchestrator | Saturday 13 September 2025 01:00:55 +0000 (0:00:10.314) 0:01:57.558 **** 2025-09-13 01:01:30.720243 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-13 01:01:30.720261 | orchestrator | 2025-09-13 01:01:30.720279 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-13 01:01:30.720299 | orchestrator | Saturday 13 September 2025 01:01:16 +0000 (0:00:20.812) 0:02:18.371 **** 2025-09-13 01:01:30.720326 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-13 01:01:30.720346 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-13 
01:01:30.720363 | orchestrator | 2025-09-13 01:01:30.720380 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-13 01:01:30.720396 | orchestrator | Saturday 13 September 2025 01:01:23 +0000 (0:00:06.489) 0:02:24.861 **** 2025-09-13 01:01:30.720416 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:01:30.720438 | orchestrator | 2025-09-13 01:01:30.720454 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-13 01:01:30.720470 | orchestrator | Saturday 13 September 2025 01:01:23 +0000 (0:00:00.194) 0:02:25.056 **** 2025-09-13 01:01:30.720486 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:01:30.720503 | orchestrator | 2025-09-13 01:01:30.720513 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-13 01:01:30.720523 | orchestrator | Saturday 13 September 2025 01:01:23 +0000 (0:00:00.357) 0:02:25.413 **** 2025-09-13 01:01:30.720532 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:01:30.720542 | orchestrator | 2025-09-13 01:01:30.720551 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-13 01:01:30.720561 | orchestrator | Saturday 13 September 2025 01:01:24 +0000 (0:00:00.280) 0:02:25.693 **** 2025-09-13 01:01:30.720571 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:01:30.720580 | orchestrator | 2025-09-13 01:01:30.720590 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-13 01:01:30.720599 | orchestrator | Saturday 13 September 2025 01:01:25 +0000 (0:00:01.310) 0:02:27.004 **** 2025-09-13 01:01:30.720609 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:01:30.720618 | orchestrator | 2025-09-13 01:01:30.720628 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-13 01:01:30.720638 | orchestrator | Saturday 13 
September 2025 01:01:28 +0000 (0:00:03.096) 0:02:30.101 **** 2025-09-13 01:01:30.720647 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:01:30.720657 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:01:30.720666 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:01:30.720676 | orchestrator | 2025-09-13 01:01:30.720685 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 01:01:30.720695 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-13 01:01:30.720706 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-13 01:01:30.720733 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-13 01:01:30.720743 | orchestrator | 2025-09-13 01:01:30.720753 | orchestrator | 2025-09-13 01:01:30.720763 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 01:01:30.720773 | orchestrator | Saturday 13 September 2025 01:01:29 +0000 (0:00:00.698) 0:02:30.800 **** 2025-09-13 01:01:30.720810 | orchestrator | =============================================================================== 2025-09-13 01:01:30.720822 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.81s 2025-09-13 01:01:30.720832 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 20.29s 2025-09-13 01:01:30.720841 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.64s 2025-09-13 01:01:30.720851 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.31s 2025-09-13 01:01:30.720860 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.03s 2025-09-13 01:01:30.720870 | orchestrator | keystone : Copying files for 
keystone-fernet ---------------------------- 8.55s 2025-09-13 01:01:30.720880 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.39s 2025-09-13 01:01:30.720889 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.49s 2025-09-13 01:01:30.720899 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.14s 2025-09-13 01:01:30.720909 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.57s 2025-09-13 01:01:30.720918 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.37s 2025-09-13 01:01:30.720928 | orchestrator | keystone : Creating default user role ----------------------------------- 3.10s 2025-09-13 01:01:30.720938 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.05s 2025-09-13 01:01:30.720948 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.69s 2025-09-13 01:01:30.720957 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.21s 2025-09-13 01:01:30.720967 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.17s 2025-09-13 01:01:30.720976 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.17s 2025-09-13 01:01:30.720986 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.13s 2025-09-13 01:01:30.720996 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.84s 2025-09-13 01:01:30.721006 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.74s 2025-09-13 01:01:30.721015 | orchestrator | 2025-09-13 01:01:30 | INFO  | Task 6b8416a9-dd48-4818-b4a0-75aef24437e8 is in state STARTED 2025-09-13 01:01:30.721025 | orchestrator | 2025-09-13 01:01:30 | INFO  | 
Wait 1 second(s) until the next check
[polling condensed: from 2025-09-13 01:01:33 to 01:02:31 the orchestrator re-checked task states every ~3 s; tasks e7434a3a-461a-4c49-87c7-fb680577cb30, a99a8ed4-3566-41e8-b8b3-6d2f74edf390, a3f88062-ef98-4fe1-b807-934799561ecc and 6b8416a9-dd48-4818-b4a0-75aef24437e8 remained in state STARTED throughout this window; task e5f17f68-d3e2-4148-a75b-f9c95536ba92 reached SUCCESS at 01:02:01 and task 48176d52-70eb-4f83-9826-1dd2e9823469 entered STARTED at 01:02:04]
2025-09-13 01:02:31.410725 | orchestrator | 2025-09-13 01:02:31 | INFO  | Wait 1
second(s) until the next check 2025-09-13 01:02:34.438450 | orchestrator | 2025-09-13 01:02:34 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED 2025-09-13 01:02:34.440218 | orchestrator | 2025-09-13 01:02:34 | INFO  | Task a99a8ed4-3566-41e8-b8b3-6d2f74edf390 is in state STARTED 2025-09-13 01:02:34.441260 | orchestrator | 2025-09-13 01:02:34 | INFO  | Task a3f88062-ef98-4fe1-b807-934799561ecc is in state STARTED 2025-09-13 01:02:34.442703 | orchestrator | 2025-09-13 01:02:34 | INFO  | Task 6b8416a9-dd48-4818-b4a0-75aef24437e8 is in state STARTED 2025-09-13 01:02:34.446278 | orchestrator | 2025-09-13 01:02:34 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:02:34.446874 | orchestrator | 2025-09-13 01:02:34 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:02:37.471931 | orchestrator | 2025-09-13 01:02:37 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED 2025-09-13 01:02:37.477235 | orchestrator | 2025-09-13 01:02:37 | INFO  | Task a99a8ed4-3566-41e8-b8b3-6d2f74edf390 is in state STARTED 2025-09-13 01:02:37.477548 | orchestrator | 2025-09-13 01:02:37 | INFO  | Task a3f88062-ef98-4fe1-b807-934799561ecc is in state SUCCESS 2025-09-13 01:02:37.477967 | orchestrator | 2025-09-13 01:02:37.477994 | orchestrator | 2025-09-13 01:02:37.478006 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-13 01:02:37.478062 | orchestrator | 2025-09-13 01:02:37.478075 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-13 01:02:37.478087 | orchestrator | Saturday 13 September 2025 01:01:27 +0000 (0:00:00.299) 0:00:00.299 **** 2025-09-13 01:02:37.478099 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:02:37.478111 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:02:37.478122 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:02:37.478133 | orchestrator | ok: [testbed-manager] 
2025-09-13 01:02:37.478145 | orchestrator | ok: [testbed-node-3] 2025-09-13 01:02:37.478156 | orchestrator | ok: [testbed-node-4] 2025-09-13 01:02:37.478166 | orchestrator | ok: [testbed-node-5] 2025-09-13 01:02:37.478245 | orchestrator | 2025-09-13 01:02:37.478261 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-13 01:02:37.478273 | orchestrator | Saturday 13 September 2025 01:01:29 +0000 (0:00:01.099) 0:00:01.399 **** 2025-09-13 01:02:37.478284 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-13 01:02:37.478296 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-13 01:02:37.478307 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-13 01:02:37.478319 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-13 01:02:37.478330 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-13 01:02:37.478341 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-09-13 01:02:37.478352 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-13 01:02:37.478363 | orchestrator | 2025-09-13 01:02:37.478374 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-13 01:02:37.478386 | orchestrator | 2025-09-13 01:02:37.478397 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-13 01:02:37.478408 | orchestrator | Saturday 13 September 2025 01:01:30 +0000 (0:00:01.857) 0:00:03.257 **** 2025-09-13 01:02:37.478437 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-13 01:02:37.478450 | orchestrator | 2025-09-13 01:02:37.478462 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-09-13 
01:02:37.478473 | orchestrator | Saturday 13 September 2025 01:01:32 +0000 (0:00:01.572) 0:00:04.830 **** 2025-09-13 01:02:37.478484 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-09-13 01:02:37.478495 | orchestrator | 2025-09-13 01:02:37.478506 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-13 01:02:37.478540 | orchestrator | Saturday 13 September 2025 01:01:35 +0000 (0:00:03.219) 0:00:08.050 **** 2025-09-13 01:02:37.478553 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-13 01:02:37.478565 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-13 01:02:37.478576 | orchestrator | 2025-09-13 01:02:37.478587 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-09-13 01:02:37.478598 | orchestrator | Saturday 13 September 2025 01:01:41 +0000 (0:00:05.891) 0:00:13.941 **** 2025-09-13 01:02:37.478609 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-13 01:02:37.478619 | orchestrator | 2025-09-13 01:02:37.478630 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-09-13 01:02:37.478641 | orchestrator | Saturday 13 September 2025 01:01:44 +0000 (0:00:02.800) 0:00:16.742 **** 2025-09-13 01:02:37.478652 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-13 01:02:37.478662 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-09-13 01:02:37.478673 | orchestrator | 2025-09-13 01:02:37.478684 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-09-13 01:02:37.478694 | orchestrator | Saturday 13 September 2025 01:01:48 +0000 (0:00:03.988) 0:00:20.730 **** 2025-09-13 01:02:37.478705 | 
orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-13 01:02:37.478716 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-09-13 01:02:37.478727 | orchestrator | 2025-09-13 01:02:37.478738 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-13 01:02:37.478749 | orchestrator | Saturday 13 September 2025 01:01:55 +0000 (0:00:06.953) 0:00:27.684 **** 2025-09-13 01:02:37.478759 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-09-13 01:02:37.478770 | orchestrator | 2025-09-13 01:02:37.478780 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 01:02:37.478791 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 01:02:37.478824 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 01:02:37.478836 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 01:02:37.478847 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 01:02:37.478860 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 01:02:37.478978 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 01:02:37.479058 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 01:02:37.479074 | orchestrator | 2025-09-13 01:02:37.479087 | orchestrator | 2025-09-13 01:02:37.479100 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 01:02:37.479113 | orchestrator | Saturday 13 September 2025 01:01:59 +0000 (0:00:04.548) 0:00:32.232 **** 2025-09-13 01:02:37.479126 | orchestrator | 
=============================================================================== 2025-09-13 01:02:37.479138 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.95s 2025-09-13 01:02:37.479151 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.89s 2025-09-13 01:02:37.479164 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.55s 2025-09-13 01:02:37.479176 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.99s 2025-09-13 01:02:37.479198 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.22s 2025-09-13 01:02:37.479211 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.80s 2025-09-13 01:02:37.479224 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.86s 2025-09-13 01:02:37.479237 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.57s 2025-09-13 01:02:37.479248 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.10s 2025-09-13 01:02:37.479259 | orchestrator | 2025-09-13 01:02:37.479269 | orchestrator | 2025-09-13 01:02:37.479280 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2025-09-13 01:02:37.479291 | orchestrator | 2025-09-13 01:02:37.479302 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-09-13 01:02:37.479313 | orchestrator | Saturday 13 September 2025 01:01:19 +0000 (0:00:00.257) 0:00:00.257 **** 2025-09-13 01:02:37.479324 | orchestrator | changed: [testbed-manager] 2025-09-13 01:02:37.479335 | orchestrator | 2025-09-13 01:02:37.479353 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-09-13 01:02:37.479364 | orchestrator | Saturday 13
September 2025 01:01:20 +0000 (0:00:01.433) 0:00:01.690 **** 2025-09-13 01:02:37.479375 | orchestrator | changed: [testbed-manager] 2025-09-13 01:02:37.479386 | orchestrator | 2025-09-13 01:02:37.479397 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-13 01:02:37.479408 | orchestrator | Saturday 13 September 2025 01:01:22 +0000 (0:00:01.117) 0:00:02.808 **** 2025-09-13 01:02:37.479419 | orchestrator | changed: [testbed-manager] 2025-09-13 01:02:37.479430 | orchestrator | 2025-09-13 01:02:37.479441 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-13 01:02:37.479451 | orchestrator | Saturday 13 September 2025 01:01:23 +0000 (0:00:01.042) 0:00:03.850 **** 2025-09-13 01:02:37.479463 | orchestrator | changed: [testbed-manager] 2025-09-13 01:02:37.479473 | orchestrator | 2025-09-13 01:02:37.479484 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-13 01:02:37.479495 | orchestrator | Saturday 13 September 2025 01:01:24 +0000 (0:00:01.291) 0:00:05.141 **** 2025-09-13 01:02:37.479506 | orchestrator | changed: [testbed-manager] 2025-09-13 01:02:37.479517 | orchestrator | 2025-09-13 01:02:37.479528 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-13 01:02:37.479539 | orchestrator | Saturday 13 September 2025 01:01:25 +0000 (0:00:01.473) 0:00:06.615 **** 2025-09-13 01:02:37.479550 | orchestrator | changed: [testbed-manager] 2025-09-13 01:02:37.479561 | orchestrator | 2025-09-13 01:02:37.479572 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-13 01:02:37.479583 | orchestrator | Saturday 13 September 2025 01:01:26 +0000 (0:00:01.070) 0:00:07.685 **** 2025-09-13 01:02:37.479594 | orchestrator | changed: [testbed-manager] 2025-09-13 01:02:37.479605 | orchestrator | 2025-09-13 01:02:37.479616 | 
orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-13 01:02:37.479627 | orchestrator | Saturday 13 September 2025 01:01:28 +0000 (0:00:02.029) 0:00:09.715 **** 2025-09-13 01:02:37.479638 | orchestrator | changed: [testbed-manager] 2025-09-13 01:02:37.479648 | orchestrator | 2025-09-13 01:02:37.479659 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-13 01:02:37.479671 | orchestrator | Saturday 13 September 2025 01:01:29 +0000 (0:00:00.975) 0:00:10.691 **** 2025-09-13 01:02:37.479681 | orchestrator | changed: [testbed-manager] 2025-09-13 01:02:37.479692 | orchestrator | 2025-09-13 01:02:37.479703 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-13 01:02:37.479714 | orchestrator | Saturday 13 September 2025 01:02:12 +0000 (0:00:42.463) 0:00:53.154 **** 2025-09-13 01:02:37.479725 | orchestrator | skipping: [testbed-manager] 2025-09-13 01:02:37.479736 | orchestrator | 2025-09-13 01:02:37.479747 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-13 01:02:37.479764 | orchestrator | 2025-09-13 01:02:37.479775 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-13 01:02:37.479786 | orchestrator | Saturday 13 September 2025 01:02:12 +0000 (0:00:00.136) 0:00:53.290 **** 2025-09-13 01:02:37.479816 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:02:37.479828 | orchestrator | 2025-09-13 01:02:37.479839 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-13 01:02:37.479850 | orchestrator | 2025-09-13 01:02:37.479861 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-13 01:02:37.479872 | orchestrator | Saturday 13 September 2025 01:02:24 +0000 (0:00:11.676) 0:01:04.966 **** 2025-09-13 
01:02:37.479883 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:02:37.479894 | orchestrator | 2025-09-13 01:02:37.479905 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-13 01:02:37.479916 | orchestrator | 2025-09-13 01:02:37.479926 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-13 01:02:37.479937 | orchestrator | Saturday 13 September 2025 01:02:35 +0000 (0:00:11.284) 0:01:16.251 **** 2025-09-13 01:02:37.479948 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:02:37.479959 | orchestrator | 2025-09-13 01:02:37.479978 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 01:02:37.479989 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-13 01:02:37.480000 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 01:02:37.480011 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 01:02:37.480022 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 01:02:37.480033 | orchestrator | 2025-09-13 01:02:37.480044 | orchestrator | 2025-09-13 01:02:37.480055 | orchestrator | 2025-09-13 01:02:37.480066 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 01:02:37.480077 | orchestrator | Saturday 13 September 2025 01:02:36 +0000 (0:00:01.159) 0:01:17.411 **** 2025-09-13 01:02:37.480088 | orchestrator | =============================================================================== 2025-09-13 01:02:37.480099 | orchestrator | Create admin user ------------------------------------------------------ 42.46s 2025-09-13 01:02:37.480110 | orchestrator | Restart ceph manager service 
------------------------------------------- 24.12s 2025-09-13 01:02:37.480120 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.03s 2025-09-13 01:02:37.480131 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.47s 2025-09-13 01:02:37.480142 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.43s 2025-09-13 01:02:37.480153 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.29s 2025-09-13 01:02:37.480169 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.12s 2025-09-13 01:02:37.480180 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.07s 2025-09-13 01:02:37.480193 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.04s 2025-09-13 01:02:37.480213 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.98s 2025-09-13 01:02:37.480225 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s 2025-09-13 01:02:37.480235 | orchestrator | 2025-09-13 01:02:37 | INFO  | Task 6b8416a9-dd48-4818-b4a0-75aef24437e8 is in state STARTED 2025-09-13 01:02:37.480247 | orchestrator | 2025-09-13 01:02:37 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:02:37.480258 | orchestrator | 2025-09-13 01:02:37 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:02:40.617285 | orchestrator | 2025-09-13 01:02:40 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED 2025-09-13 01:02:40.617385 | orchestrator | 2025-09-13 01:02:40 | INFO  | Task a99a8ed4-3566-41e8-b8b3-6d2f74edf390 is in state STARTED 2025-09-13 01:02:40.617400 | orchestrator | 2025-09-13 01:02:40 | INFO  | Task 6b8416a9-dd48-4818-b4a0-75aef24437e8 is in state STARTED 2025-09-13 01:02:40.617412 | 
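The ceph dashboard play above is a fixed sequence of manager-module and config operations: disable the dashboard module, set the mgr/dashboard/* options, then re-enable it. As a minimal sketch (the option values are taken from the log; whether the role shells out to the `ceph` CLI or uses a module is an assumption, not visible here), the command sequence can be reconstructed from the task names:

```python
# Reconstructed from the task names in the play above; option values are
# taken verbatim from the log. Driving the ceph CLI directly is an
# assumption about the role's implementation.
DASHBOARD_SETTINGS = [
    ("mgr/dashboard/ssl", "false"),
    ("mgr/dashboard/server_port", "7000"),
    ("mgr/dashboard/server_addr", "0.0.0.0"),
    ("mgr/dashboard/standby_behaviour", "error"),
    ("mgr/dashboard/standby_error_status_code", "404"),
]

def dashboard_bootstrap_commands(settings=DASHBOARD_SETTINGS):
    """Build the command sequence: disable the module, apply each
    mgr/dashboard/* option, then re-enable the module."""
    cmds = ["ceph mgr module disable dashboard"]
    cmds += [f"ceph config set mgr {key} {value}" for key, value in settings]
    cmds.append("ceph mgr module enable dashboard")
    return cmds
```

The play then writes the dashboard password to a temporary file and creates the admin user from it; the 42 s spent in "Create admin user" suggests that step also waits for the re-enabled dashboard module to come up.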
orchestrator | 2025-09-13 01:02:40 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:02:40.617423 | orchestrator | 2025-09-13 01:02:40 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycles for tasks e7434a3a-461a-4c49-87c7-fb680577cb30, a99a8ed4-3566-41e8-b8b3-6d2f74edf390, 6b8416a9-dd48-4818-b4a0-75aef24437e8 and 48176d52-70eb-4f83-9826-1dd2e9823469 (all "in state STARTED", followed by "Wait 1 second(s) until the next check") repeated roughly every 3 seconds from 01:02:43 through 01:04:18; repeats trimmed ...]
2025-09-13 01:04:21.201066 | orchestrator | 2025-09-13 01:04:21 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED
2025-09-13 01:04:21.204282 | orchestrator | 2025-09-13 01:04:21 | INFO  | Task a99a8ed4-3566-41e8-b8b3-6d2f74edf390 is in state STARTED
2025-09-13 01:04:21.208668 | orchestrator | 2025-09-13 01:04:21 | INFO  | Task 6b8416a9-dd48-4818-b4a0-75aef24437e8 is in state SUCCESS
2025-09-13 01:04:21.211462 | orchestrator | 2025-09-13 01:04:21.211498 | orchestrator | 2025-09-13 
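The repeated "is in state STARTED ... Wait 1 second(s) until the next check" messages above are a plain poll-until-terminal loop. A minimal sketch of that pattern, assuming a `get_state` callable and the terminal-state names (this is not the actual OSISM client API, just the shape of the loop the log implies):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}  # assumed terminal task states

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll each task until all reach a terminal state, logging as above."""
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state not in TERMINAL_STATES:
                still_running.append(task_id)
        if not still_running:
            return
        log(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
        pending = still_running
```

Each real cycle in the log takes ~3 s despite the 1 s wait, because the state lookups themselves add latency on top of the sleep.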
01:04:21.211510 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-13 01:04:21.211522 | orchestrator | 2025-09-13 01:04:21.211534 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-13 01:04:21.211568 | orchestrator | Saturday 13 September 2025 01:01:27 +0000 (0:00:00.295) 0:00:00.295 **** 2025-09-13 01:04:21.211580 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:04:21.211592 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:04:21.211603 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:04:21.211614 | orchestrator | 2025-09-13 01:04:21.211625 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-13 01:04:21.211686 | orchestrator | Saturday 13 September 2025 01:01:27 +0000 (0:00:00.317) 0:00:00.612 **** 2025-09-13 01:04:21.211701 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-09-13 01:04:21.211713 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-09-13 01:04:21.211724 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-09-13 01:04:21.211734 | orchestrator | 2025-09-13 01:04:21.211745 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-09-13 01:04:21.211756 | orchestrator | 2025-09-13 01:04:21.211767 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-13 01:04:21.211777 | orchestrator | Saturday 13 September 2025 01:01:28 +0000 (0:00:00.413) 0:00:01.026 **** 2025-09-13 01:04:21.211788 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 01:04:21.211800 | orchestrator | 2025-09-13 01:04:21.211811 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-09-13 01:04:21.211851 | orchestrator | Saturday 13 September 
2025 01:01:29 +0000 (0:00:01.086) 0:00:02.112 **** 2025-09-13 01:04:21.211862 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-09-13 01:04:21.211873 | orchestrator | 2025-09-13 01:04:21.211884 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-09-13 01:04:21.211895 | orchestrator | Saturday 13 September 2025 01:01:33 +0000 (0:00:03.732) 0:00:05.844 **** 2025-09-13 01:04:21.211905 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-09-13 01:04:21.211916 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-09-13 01:04:21.211927 | orchestrator | 2025-09-13 01:04:21.211938 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-09-13 01:04:21.211950 | orchestrator | Saturday 13 September 2025 01:01:39 +0000 (0:00:06.425) 0:00:12.269 **** 2025-09-13 01:04:21.211961 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-13 01:04:21.211972 | orchestrator | 2025-09-13 01:04:21.211983 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-09-13 01:04:21.211994 | orchestrator | Saturday 13 September 2025 01:01:42 +0000 (0:00:03.367) 0:00:15.637 **** 2025-09-13 01:04:21.212005 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-13 01:04:21.212016 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-09-13 01:04:21.212027 | orchestrator | 2025-09-13 01:04:21.212038 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-09-13 01:04:21.212049 | orchestrator | Saturday 13 September 2025 01:01:46 +0000 (0:00:03.950) 0:00:19.587 **** 2025-09-13 01:04:21.212060 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-13 01:04:21.212074 | orchestrator | 2025-09-13 
01:04:21.212086 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-09-13 01:04:21.212099 | orchestrator | Saturday 13 September 2025 01:01:50 +0000 (0:00:03.418) 0:00:23.006 **** 2025-09-13 01:04:21.212111 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-13 01:04:21.212123 | orchestrator | 2025-09-13 01:04:21.212136 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-13 01:04:21.212149 | orchestrator | Saturday 13 September 2025 01:01:55 +0000 (0:00:05.087) 0:00:28.094 **** 2025-09-13 01:04:21.212183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-13 01:04:21.212219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-13 01:04:21.212268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-13 01:04:21.212290 | orchestrator | 2025-09-13 01:04:21.212303 | orchestrator | TASK [glance : include_tasks] 
**************************************************
2025-09-13 01:04:21.212316 | orchestrator | Saturday 13 September 2025 01:01:58 +0000 (0:00:03.091) 0:00:31.185 ****
2025-09-13 01:04:21.212328 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 01:04:21.212341 | orchestrator |
2025-09-13 01:04:21.212360 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-09-13 01:04:21.212373 | orchestrator | Saturday 13 September 2025 01:01:59 +0000 (0:00:00.595) 0:00:31.781 ****
2025-09-13 01:04:21.212385 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:04:21.212398 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:04:21.212411 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:04:21.212423 | orchestrator |
2025-09-13 01:04:21.212435 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-09-13 01:04:21.212446 | orchestrator | Saturday 13 September 2025 01:02:02 +0000 (0:00:03.746) 0:00:35.527 ****
2025-09-13 01:04:21.212462 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-13 01:04:21.212474 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-13 01:04:21.212484 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-13 01:04:21.212495 | orchestrator |
2025-09-13 01:04:21.212506 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-09-13 01:04:21.212517 | orchestrator | Saturday 13 September 2025 01:02:04 +0000 (0:00:01.386) 0:00:36.913 ****
2025-09-13 01:04:21.212527 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-13 01:04:21.212538 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-13 01:04:21.212548 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-13 01:04:21.212559 | orchestrator |
2025-09-13 01:04:21.212570 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-09-13 01:04:21.212581 | orchestrator | Saturday 13 September 2025 01:02:05 +0000 (0:00:01.278) 0:00:38.192 ****
2025-09-13 01:04:21.212591 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:04:21.212602 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:04:21.212613 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:04:21.212623 | orchestrator |
2025-09-13 01:04:21.212634 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-09-13 01:04:21.212645 | orchestrator | Saturday 13 September 2025 01:02:06 +0000 (0:00:00.244) 0:00:38.950 ****
2025-09-13 01:04:21.212656 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:04:21.212666 | orchestrator |
2025-09-13 01:04:21.212677 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-09-13 01:04:21.212688 | orchestrator | Saturday 13 September 2025 01:02:06 +0000 (0:00:00.279) 0:00:39.195 ****
2025-09-13 01:04:21.212705 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:04:21.212716 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:04:21.212726 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:04:21.212737 | orchestrator |
2025-09-13 01:04:21.212748 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-13 01:04:21.212759 | orchestrator | Saturday 13 September 2025 01:02:06 +0000 (0:00:00.279) 0:00:39.475 ****
2025-09-13 01:04:21.212770 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 01:04:21.212780 | orchestrator |
2025-09-13 01:04:21.212791 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-09-13 01:04:21.212802 | orchestrator | Saturday 13 September 2025 01:02:07 +0000 (0:00:00.512) 0:00:39.987 ****
2025-09-13 01:04:21.212838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-13 01:04:21.212859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-13 01:04:21.212879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-13 01:04:21.212891 | orchestrator |
2025-09-13 01:04:21.212903 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2025-09-13 01:04:21.212913 | orchestrator | Saturday 13 September 2025 01:02:11 +0000 (0:00:04.243) 0:00:44.231 ****
2025-09-13 01:04:21.212938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-13 01:04:21.212951 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:04:21.212964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-13 01:04:21.212982 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:04:21.213006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-13 01:04:21.213019 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:04:21.213030 | orchestrator |
2025-09-13 01:04:21.213041 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2025-09-13 01:04:21.213051 | orchestrator | Saturday 13 September 2025 01:02:16 +0000 (0:00:05.307) 0:00:49.538 ****
2025-09-13 01:04:21.213063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-13 01:04:21.213081 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:04:21.213098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-13 01:04:21.213111 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:04:21.213128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-13 01:04:21.213152 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:04:21.213164 | orchestrator |
2025-09-13 01:04:21.213174 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2025-09-13 01:04:21.213185 | orchestrator | Saturday 13 September 2025 01:02:20 +0000 (0:00:03.384) 0:00:52.923 ****
2025-09-13 01:04:21.213196 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:04:21.213207 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:04:21.213217 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:04:21.213228 | orchestrator |
2025-09-13 01:04:21.213239 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2025-09-13 01:04:21.213250 | orchestrator | Saturday 13 September 2025 01:02:24 +0000 (0:00:04.300) 0:00:57.223 ****
2025-09-13 01:04:21.213267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-13 01:04:21.213285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-13 01:04:21.213305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-13 01:04:21.213317 | orchestrator |
2025-09-13 01:04:21.213328 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-09-13 01:04:21.213339 | orchestrator | Saturday 13 September 2025 01:02:29 +0000 (0:00:04.654) 0:01:01.877 ****
2025-09-13 01:04:21.213350 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:04:21.213360 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:04:21.213371 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:04:21.213382 | orchestrator |
2025-09-13 01:04:21.213392 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-09-13 01:04:21.213403 | orchestrator | Saturday 13 September 2025 01:02:35 +0000 (0:00:06.521) 0:01:08.398 ****
2025-09-13 01:04:21.213414 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:04:21.213424 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:04:21.213435 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:04:21.213446 | orchestrator |
2025-09-13 01:04:21.213457 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-09-13 01:04:21.213611 | orchestrator | Saturday 13 September 2025 01:02:42 +0000 (0:00:06.435) 0:01:14.834 ****
2025-09-13 01:04:21.213627 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:04:21.213638 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:04:21.213656 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:04:21.213667 | orchestrator |
2025-09-13 01:04:21.213677 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-09-13 01:04:21.213688 | orchestrator | Saturday 13 September 2025 01:02:47 +0000 (0:00:05.716) 0:01:20.550 ****
2025-09-13 01:04:21.213699 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:04:21.213710 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:04:21.213727 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:04:21.213738 | orchestrator |
2025-09-13 01:04:21.213749 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-09-13 01:04:21.213760 | orchestrator | Saturday 13 September 2025 01:02:52 +0000 (0:00:04.455) 0:01:25.006 ****
2025-09-13 01:04:21.213771 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:04:21.213781 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:04:21.213792 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:04:21.213803 | orchestrator |
2025-09-13 01:04:21.213831 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-09-13 01:04:21.213842 | orchestrator | Saturday 13 September 2025 01:02:56 +0000 (0:00:04.507) 0:01:29.513 ****
2025-09-13 01:04:21.213853 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:04:21.213864 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:04:21.213875 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:04:21.213885 | orchestrator |
2025-09-13 01:04:21.213896 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-09-13 01:04:21.213906 | orchestrator | Saturday 13 September 2025 01:02:57 +0000 (0:00:00.279) 0:01:29.793 ****
2025-09-13 01:04:21.213917 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-09-13 01:04:21.213928 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:04:21.213939 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-09-13 01:04:21.213950 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:04:21.213961 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-09-13 01:04:21.213972 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:04:21.213982 | orchestrator |
2025-09-13 01:04:21.213993 | orchestrator | TASK [glance : Check glance containers] ****************************************
2025-09-13 01:04:21.214004 | orchestrator | Saturday 13 September 2025 01:02:59 +0000 (0:00:02.809) 0:01:32.603 ****
2025-09-13 01:04:21.214061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-13 01:04:21.214101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-13 01:04:21.214114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-13 01:04:21.214127 | orchestrator |
2025-09-13 01:04:21.214138 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-13 01:04:21.214149 | orchestrator | Saturday 13 September 2025 01:03:03 +0000 (0:00:04.022) 0:01:36.625 ****
2025-09-13 01:04:21.214160 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:04:21.214170 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:04:21.214181 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:04:21.214198 | orchestrator |
2025-09-13 01:04:21.214209 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-09-13 01:04:21.214220 | orchestrator | Saturday 13 September 2025 01:03:04 +0000 (0:00:00.260) 0:01:36.886 ****
2025-09-13 01:04:21.214231 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:04:21.214244 | orchestrator |
2025-09-13 01:04:21.214256 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-09-13 01:04:21.214268 | orchestrator | Saturday 13 September 2025 01:03:06 +0000 (0:00:01.929) 0:01:38.816 ****
2025-09-13 01:04:21.214281 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:04:21.214294 | orchestrator |
2025-09-13 01:04:21.214306 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-09-13 01:04:21.214318 | orchestrator | Saturday 13 September 2025 01:03:07 +0000 (0:00:01.762) 0:01:40.578 ****
2025-09-13 01:04:21.214331 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:04:21.214344 | orchestrator |
2025-09-13 01:04:21.214356 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-09-13 01:04:21.214368 | orchestrator | Saturday 13 September 2025 01:03:09 +0000 (0:00:02.014) 0:01:42.592 ****
2025-09-13 01:04:21.214381 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:04:21.214392 | orchestrator |
2025-09-13 01:04:21.214405 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-09-13 01:04:21.214418 | orchestrator | Saturday 13 September 2025 01:03:35 +0000 (0:00:25.950) 0:02:08.542 ****
2025-09-13 01:04:21.214431 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:04:21.214443 | orchestrator |
2025-09-13 01:04:21.214460 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-13 01:04:21.214473 | orchestrator | Saturday 13 September 2025 01:03:38 +0000 (0:00:00.063) 0:02:10.747 ****
2025-09-13 01:04:21.214485 | orchestrator |
2025-09-13 01:04:21.214498 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-13 01:04:21.214510 | orchestrator | Saturday 13 September 2025 01:03:38 +0000 (0:00:00.072) 0:02:10.810 ****
2025-09-13 01:04:21.214522 | orchestrator |
2025-09-13 01:04:21.214535 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-13 01:04:21.214552 | orchestrator | Saturday 13 September 2025 01:03:38 +0000 (0:00:00.062) 0:02:10.883 ****
2025-09-13 01:04:21.214565 | orchestrator |
2025-09-13 01:04:21.214577 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-09-13 01:04:21.214590 | orchestrator | Saturday 13 September 2025 01:03:38 +0000 (0:00:00.062) 0:02:10.946 ****
2025-09-13 01:04:21.214601 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:04:21.214612 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:04:21.214623 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:04:21.214634 | orchestrator |
2025-09-13 01:04:21.214645 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 01:04:21.214657 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-13 01:04:21.214669 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-13 01:04:21.214680 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-13 01:04:21.214690 | orchestrator |
2025-09-13 01:04:21.214701 | orchestrator |
2025-09-13 01:04:21.214712 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 01:04:21.214723 | orchestrator | Saturday 13 September 2025 01:04:20 +0000 (0:00:42.060) 0:02:53.007 ****
2025-09-13 01:04:21.214734 | orchestrator | ===============================================================================
2025-09-13 01:04:21.214745 | orchestrator | glance : Restart glance-api container ---------------------------------- 42.06s
2025-09-13 01:04:21.214756 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.95s
2025-09-13 01:04:21.214773 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.52s
2025-09-13 01:04:21.214783 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.44s
2025-09-13 01:04:21.214794 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.43s
2025-09-13 01:04:21.214805 |
orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.72s 2025-09-13 01:04:21.214835 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.31s 2025-09-13 01:04:21.214846 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 5.09s 2025-09-13 01:04:21.214857 | orchestrator | glance : Copying over config.json files for services -------------------- 4.65s 2025-09-13 01:04:21.214868 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.51s 2025-09-13 01:04:21.214878 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.46s 2025-09-13 01:04:21.214889 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.30s 2025-09-13 01:04:21.214900 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.24s 2025-09-13 01:04:21.214911 | orchestrator | glance : Check glance containers ---------------------------------------- 4.02s 2025-09-13 01:04:21.214922 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.95s 2025-09-13 01:04:21.214932 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.75s 2025-09-13 01:04:21.214943 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.73s 2025-09-13 01:04:21.214954 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.42s 2025-09-13 01:04:21.214964 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.38s 2025-09-13 01:04:21.214975 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.37s 2025-09-13 01:04:21.214986 | orchestrator | 2025-09-13 01:04:21 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:04:21.214997 
| orchestrator | 2025-09-13 01:04:21 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:04:24.280944 | orchestrator | 2025-09-13 01:04:24 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED 2025-09-13 01:04:24.281256 | orchestrator | 2025-09-13 01:04:24 | INFO  | Task a99a8ed4-3566-41e8-b8b3-6d2f74edf390 is in state STARTED 2025-09-13 01:04:24.282668 | orchestrator | 2025-09-13 01:04:24 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:04:24.284769 | orchestrator | 2025-09-13 01:04:24 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED 2025-09-13 01:04:24.285412 | orchestrator | 2025-09-13 01:04:24 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:04:27.325080 | orchestrator | 2025-09-13 01:04:27 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED 2025-09-13 01:04:27.330411 | orchestrator | 2025-09-13 01:04:27 | INFO  | Task a99a8ed4-3566-41e8-b8b3-6d2f74edf390 is in state SUCCESS 2025-09-13 01:04:27.333104 | orchestrator | 2025-09-13 01:04:27.333251 | orchestrator | 2025-09-13 01:04:27.333266 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-13 01:04:27.333274 | orchestrator | 2025-09-13 01:04:27.333281 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-13 01:04:27.333288 | orchestrator | Saturday 13 September 2025 01:01:19 +0000 (0:00:00.257) 0:00:00.257 **** 2025-09-13 01:04:27.333295 | orchestrator | ok: [testbed-manager] 2025-09-13 01:04:27.333303 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:04:27.333309 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:04:27.333330 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:04:27.333337 | orchestrator | ok: [testbed-node-3] 2025-09-13 01:04:27.333344 | orchestrator | ok: [testbed-node-4] 2025-09-13 01:04:27.333367 | orchestrator | ok: [testbed-node-5] 2025-09-13 01:04:27.333374 | 
orchestrator | 2025-09-13 01:04:27.333381 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-13 01:04:27.333387 | orchestrator | Saturday 13 September 2025 01:01:20 +0000 (0:00:00.844) 0:00:01.102 **** 2025-09-13 01:04:27.333394 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-09-13 01:04:27.333402 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-09-13 01:04:27.333408 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-13 01:04:27.333415 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-13 01:04:27.333421 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-13 01:04:27.333428 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-13 01:04:27.333472 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-13 01:04:27.333481 | orchestrator | 2025-09-13 01:04:27.333488 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-13 01:04:27.333494 | orchestrator | 2025-09-13 01:04:27.333501 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-13 01:04:27.333508 | orchestrator | Saturday 13 September 2025 01:01:21 +0000 (0:00:00.944) 0:00:02.047 **** 2025-09-13 01:04:27.333515 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-13 01:04:27.333524 | orchestrator | 2025-09-13 01:04:27.333530 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-13 01:04:27.333537 | orchestrator | Saturday 13 September 2025 01:01:22 +0000 (0:00:01.686) 0:00:03.733 **** 2025-09-13 01:04:27.333546 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.333556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.333565 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-13 01:04:27.333574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.333602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.333616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.333623 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.333632 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.333640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.333647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.333654 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.333664 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.333970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.333988 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.333997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.334005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.334014 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.334058 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-13 01:04:27.334077 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.334095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.334103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.334110 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.334118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.334126 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.334133 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.334140 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.334152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.334166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.334174 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.334181 | orchestrator | 2025-09-13 01:04:27.334188 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-13 01:04:27.334195 | orchestrator | Saturday 13 September 2025 01:01:26 +0000 (0:00:04.073) 0:00:07.807 **** 2025-09-13 01:04:27.334202 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-13 01:04:27.334209 | orchestrator | 2025-09-13 01:04:27.334216 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-13 01:04:27.334223 | orchestrator | Saturday 13 September 2025 01:01:28 +0000 (0:00:01.498) 0:00:09.305 **** 2025-09-13 01:04:27.334231 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-13 01:04:27.334238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.334245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.334257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.334269 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.334279 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.334286 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.334294 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.334301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.334308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.334319 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.334327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.334338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.334349 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.334356 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2025-09-13 01:04:27.334363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.334371 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.334378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.334393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.334400 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.334411 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.334421 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-13 01:04:27.334429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.334436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.334448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.334455 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.334462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.334473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.334484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.334491 | orchestrator | 2025-09-13 01:04:27.334498 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-13 01:04:27.334505 | orchestrator | Saturday 13 September 2025 01:01:34 +0000 (0:00:06.226) 0:00:15.531 **** 2025-09-13 01:04:27.334512 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-13 01:04:27.334519 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-13 01:04:27.334530 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.334538 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-13 01:04:27.334549 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
 2025-09-13 01:04:27.334557 | orchestrator | skipping: [testbed-manager] 2025-09-13 01:04:27.334567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-13 01:04:27.334574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.334582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.334589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.334601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.334608 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:04:27.334615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-13 01:04:27.334622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.334634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.334641 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:04:27.334651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-13 01:04:27.334659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-13 01:04:27.334666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.334677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.334684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.334691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.334698 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.334710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.334717 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:04:27.334727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.334734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.334749 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:04:27.334756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-13 01:04:27.334763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.334770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})  2025-09-13 01:04:27.334777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-13 01:04:27.334784 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:04:27.334791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.334806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.334827 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:04:27.334834 | orchestrator | 2025-09-13 01:04:27.334841 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 
2025-09-13 01:04:27.334848 | orchestrator | Saturday 13 September 2025 01:01:36 +0000 (0:00:01.457) 0:00:16.988 **** 2025-09-13 01:04:27.334855 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-13 01:04:27.334867 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-13 01:04:27.334874 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.334881 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-13 01:04:27.334888 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.334895 | orchestrator | skipping: [testbed-manager] 2025-09-13 01:04:27.334910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-13 01:04:27.334917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.334929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.334935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.334943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.334950 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:04:27.334957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-13 01:04:27.334964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.334971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.334987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.334999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.335007 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:04:27.335014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-13 01:04:27.335020 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.335028 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.335034 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:04:27.335041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-13 01:04:27.335048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.335059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.335069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.335081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-13 01:04:27.335088 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:04:27.335095 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-13 01:04:27.335102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.335109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.335116 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:04:27.335123 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-13 01:04:27.335129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.335460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-13 01:04:27.335482 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:04:27.335489 | orchestrator | 2025-09-13 01:04:27.335495 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-13 01:04:27.335507 | orchestrator | Saturday 13 September 2025 01:01:37 +0000 (0:00:01.847) 0:00:18.836 **** 2025-09-13 01:04:27.335514 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-13 01:04:27.335522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.335529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.335536 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.335543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.335550 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.335566 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.335577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.335585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.335639 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.335684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-13 01:04:27.335693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.335700 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.335708 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.335724 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.335736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.335743 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.335750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.335757 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-13 01:04:27.335765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.335772 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.335787 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.335798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.335805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.335838 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.335846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.335852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.335859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.335871 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.335878 | orchestrator | 2025-09-13 01:04:27.335884 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-09-13 01:04:27.335891 | orchestrator | Saturday 13 September 2025 01:01:43 +0000 (0:00:05.357) 0:00:24.193 **** 2025-09-13 01:04:27.335898 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-13 01:04:27.335905 | orchestrator | 2025-09-13 01:04:27.335912 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-09-13 01:04:27.335922 | orchestrator | Saturday 13 September 2025 01:01:44 +0000 (0:00:01.030) 0:00:25.223 **** 2025-09-13 01:04:27.335933 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1845591, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.335037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.335942 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 996, 'inode': 1845591, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.335037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.335949 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1845591, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.335037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.335956 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1845601, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3381073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.335963 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1845591, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.335037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.335976 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1845601, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3381073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.335986 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1845601, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3381073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336000 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1845591, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.335037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336007 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1845591, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.335037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336014 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1845589, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3340993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336021 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1845601, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3381073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336028 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1845589, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3340993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336039 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1845601, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3381073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336049 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1845589, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3340993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336060 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1845589, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3340993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336067 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1845591, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.335037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336074 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1845601, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3381073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336081 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1845597, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3369057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336092 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1845597, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3369057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336101 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1845589, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3340993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336117 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1845597, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3369057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336133 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1845597, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3369057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336147 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1845587, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.333635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336159 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1845589, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3340993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336171 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1845587, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.333635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336764 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1845597, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3369057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336784 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1845587, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.333635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336793 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1845597, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3369057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336902 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1845592, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3352172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336914 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1845587, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.333635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336922 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1845587, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.333635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336930 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1845592, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3352172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336947 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1845601, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3381073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336956 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1845587, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.333635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336964 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1845596, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3365548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.336998 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1845592, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3352172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337007 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1845592, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3352172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337015 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1845592, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3352172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337029 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1845592, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3352172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337037 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1845596, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3365548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337045 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1845593, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3354511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337053 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1845596, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3365548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337086 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1845596, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3365548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337095 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1845596, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3365548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337104 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1845596, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3365548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337117 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1845593, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3354511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337125 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1845590, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3347843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337134 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1845593, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3354511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337142 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1845593, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3354511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337176 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1845589, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3340993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337186 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1845593, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3354511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337195 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845600, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3379068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337208 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1845593, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3354511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337216 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1845590, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3347843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337225 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1845590, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3347843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337233 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1845590, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3347843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337302 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1845590, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3347843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337314 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845600, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3379068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337323 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845585, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3331943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337341 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1845590, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3347843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337349 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845600, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3379068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337358 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845600, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3379068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337366 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845600, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3379068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337399 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845585, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3331943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337408 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1845607, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3393755, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337422 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845600, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3379068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-13 01:04:27.337431 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk':
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845585, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3331943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337439 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1845597, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3369057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 01:04:27.337447 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845585, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3331943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337455 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845585, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 
'ctime': 1757722819.3331943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337489 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1845599, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.337314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337499 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1845607, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3393755, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337512 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1845607, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3393755, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337521 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1845599, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.337314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337529 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845588, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.333864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337537 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845585, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3331943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337545 | orchestrator | skipping: [testbed-node-0] => 
(item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1845607, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3393755, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337562 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1845607, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3393755, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337571 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1845599, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.337314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337584 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1845599, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.337314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337592 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1845586, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3334374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337600 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845588, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.333864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337608 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1845599, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 
1757721739.0, 'ctime': 1757722819.337314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337617 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1845607, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3393755, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337693 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1845595, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3358586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337714 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845588, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.333864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337722 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1845594, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3356752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337730 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845588, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.333864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337739 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1845586, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3334374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337747 | orchestrator | skipping: [testbed-node-0] => 
(item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1845586, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3334374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337755 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1845587, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.333635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 01:04:27.337769 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1845599, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.337314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337786 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845588, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.333864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337871 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1845595, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3358586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337880 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1845606, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.338983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337889 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:04:27.337897 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845588, 'dev': 105, 
'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.333864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337906 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1845595, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3358586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337914 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1845594, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3356752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337927 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1845586, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3334374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337946 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1845586, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3334374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337955 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1845606, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.338983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337963 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:04:27.337971 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1845594, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3356752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337979 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1845595, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3358586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337987 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1845595, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3358586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.337996 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1845586, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3334374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.338013 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1845594, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3356752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.338054 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1845594, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3356752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.338062 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1845592, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3352172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 01:04:27.338071 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1845606, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.338983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.338079 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:04:27.338087 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1845595, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3358586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.338095 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1845606, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.338983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.338103 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:04:27.338112 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 3539, 'inode': 1845606, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.338983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.338128 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:04:27.338148 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1845594, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3356752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.338156 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1845606, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.338983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-13 01:04:27.338164 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:04:27.338173 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 
'inode': 1845596, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3365548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 01:04:27.338181 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1845593, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3354511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 01:04:27.338189 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1845590, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3347843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 01:04:27.338198 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845600, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3379068, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 01:04:27.338211 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845585, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3331943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 01:04:27.338227 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1845607, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3393755, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 01:04:27.338235 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1845599, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.337314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}) 2025-09-13 01:04:27.338244 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1845588, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.333864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 01:04:27.338252 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1845586, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3334374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 01:04:27.338260 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1845595, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3358586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 01:04:27.338268 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1845594, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3356752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 01:04:27.338282 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1845606, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.338983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-13 01:04:27.338290 | orchestrator | 2025-09-13 01:04:27.338298 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-09-13 01:04:27.338307 | orchestrator | Saturday 13 September 2025 01:02:10 +0000 (0:00:26.540) 0:00:51.764 **** 2025-09-13 01:04:27.338315 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-13 01:04:27.338325 | orchestrator | 2025-09-13 01:04:27.338338 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-09-13 01:04:27.338347 | orchestrator | Saturday 13 September 2025 01:02:11 +0000 (0:00:00.783) 0:00:52.547 **** 2025-09-13 01:04:27.338357 | orchestrator | [WARNING]: Skipped 2025-09-13 01:04:27.338367 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-13 01:04:27.338376 | orchestrator | manager/prometheus.yml.d' path due to this 
access issue: 2025-09-13 01:04:27.338389 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-13 01:04:27.338399 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-09-13 01:04:27.338408 | orchestrator | [WARNING]: Skipped 2025-09-13 01:04:27.338417 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-13 01:04:27.338426 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-09-13 01:04:27.338435 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-13 01:04:27.338445 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-09-13 01:04:27.338454 | orchestrator | [WARNING]: Skipped 2025-09-13 01:04:27.338463 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-13 01:04:27.338471 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-09-13 01:04:27.338480 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-13 01:04:27.338490 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-09-13 01:04:27.338498 | orchestrator | [WARNING]: Skipped 2025-09-13 01:04:27.338507 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-13 01:04:27.338516 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-09-13 01:04:27.338524 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-13 01:04:27.338533 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-09-13 01:04:27.338542 | orchestrator | [WARNING]: Skipped 2025-09-13 01:04:27.338551 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-13 01:04:27.338560 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-09-13 01:04:27.338569 | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-13 01:04:27.338578 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-09-13 01:04:27.338586 | orchestrator | [WARNING]: Skipped 2025-09-13 01:04:27.338595 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-13 01:04:27.338604 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-09-13 01:04:27.338613 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-13 01:04:27.338658 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-09-13 01:04:27.338668 | orchestrator | [WARNING]: Skipped 2025-09-13 01:04:27.338675 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-13 01:04:27.338683 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-09-13 01:04:27.338691 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-13 01:04:27.338704 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-09-13 01:04:27.338718 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-13 01:04:27.338732 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-13 01:04:27.338747 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-13 01:04:27.338760 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-13 01:04:27.338773 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-13 01:04:27.338790 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-13 01:04:27.338805 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-13 01:04:27.338835 | orchestrator | 2025-09-13 01:04:27.338848 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-09-13 01:04:27.338861 | orchestrator | Saturday 13 September 2025 01:02:14 +0000 (0:00:02.707) 0:00:55.255 **** 2025-09-13 
01:04:27.338875 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-13 01:04:27.338889 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:04:27.338901 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-13 01:04:27.338914 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:04:27.338927 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-13 01:04:27.338939 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:04:27.338952 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-13 01:04:27.338966 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:04:27.338979 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-13 01:04:27.338994 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:04:27.339009 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-13 01:04:27.339024 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:04:27.339039 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-09-13 01:04:27.339053 | orchestrator | 2025-09-13 01:04:27.339068 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-09-13 01:04:27.339082 | orchestrator | Saturday 13 September 2025 01:02:34 +0000 (0:00:20.313) 0:01:15.569 **** 2025-09-13 01:04:27.339097 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-13 01:04:27.339122 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:04:27.339138 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-13 01:04:27.339153 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:04:27.339168 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-13 01:04:27.339182 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:04:27.339206 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-13 01:04:27.339221 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:04:27.339236 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-13 01:04:27.339250 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:04:27.339265 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-13 01:04:27.339280 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:04:27.339305 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-09-13 01:04:27.339320 | orchestrator | 2025-09-13 01:04:27.339336 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-09-13 01:04:27.339352 | orchestrator | Saturday 13 September 2025 01:02:38 +0000 (0:00:03.628) 0:01:19.197 **** 2025-09-13 01:04:27.339368 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-13 01:04:27.339383 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-13 01:04:27.339399 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:04:27.339415 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:04:27.339430 | orchestrator | skipping: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-13 01:04:27.339444 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:04:27.339459 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-09-13 01:04:27.339473 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-13 01:04:27.339487 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:04:27.339502 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-13 01:04:27.339517 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:04:27.339532 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-13 01:04:27.339545 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:04:27.339559 | orchestrator | 2025-09-13 01:04:27.339573 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-09-13 01:04:27.339587 | orchestrator | Saturday 13 September 2025 01:02:41 +0000 (0:00:02.968) 0:01:22.166 **** 2025-09-13 01:04:27.339601 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-13 01:04:27.339616 | orchestrator | 2025-09-13 01:04:27.339630 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-09-13 01:04:27.339645 | orchestrator | Saturday 13 September 2025 01:02:42 +0000 (0:00:00.822) 0:01:22.988 **** 2025-09-13 01:04:27.339658 | orchestrator | skipping: [testbed-manager] 2025-09-13 01:04:27.339672 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:04:27.339686 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:04:27.339701 | orchestrator | 
skipping: [testbed-node-2] 2025-09-13 01:04:27.339716 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:04:27.339730 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:04:27.339746 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:04:27.339760 | orchestrator | 2025-09-13 01:04:27.339776 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-09-13 01:04:27.339791 | orchestrator | Saturday 13 September 2025 01:02:42 +0000 (0:00:00.840) 0:01:23.829 **** 2025-09-13 01:04:27.339805 | orchestrator | skipping: [testbed-manager] 2025-09-13 01:04:27.339843 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:04:27.339857 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:04:27.339872 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:04:27.339886 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:04:27.339900 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:04:27.339915 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:04:27.339930 | orchestrator | 2025-09-13 01:04:27.339944 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-09-13 01:04:27.339958 | orchestrator | Saturday 13 September 2025 01:02:46 +0000 (0:00:03.942) 0:01:27.772 **** 2025-09-13 01:04:27.339974 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-13 01:04:27.340003 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-13 01:04:27.340018 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:04:27.340032 | orchestrator | skipping: [testbed-manager] 2025-09-13 01:04:27.340047 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-13 01:04:27.340062 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:04:27.340077 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-13 01:04:27.340091 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:04:27.340117 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-13 01:04:27.340132 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:04:27.340147 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-13 01:04:27.340160 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:04:27.340173 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-13 01:04:27.340187 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:04:27.340201 | orchestrator | 2025-09-13 01:04:27.340225 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-09-13 01:04:27.340240 | orchestrator | Saturday 13 September 2025 01:02:49 +0000 (0:00:02.333) 0:01:30.105 **** 2025-09-13 01:04:27.340255 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-13 01:04:27.340269 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:04:27.340283 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-13 01:04:27.340297 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-13 01:04:27.340311 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:04:27.340326 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:04:27.340341 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-09-13 01:04:27.340355 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-13 01:04:27.340370 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:04:27.340384 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-13 01:04:27.340399 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:04:27.340414 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-13 01:04:27.340428 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:04:27.340443 | orchestrator | 2025-09-13 01:04:27.340457 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-09-13 01:04:27.340471 | orchestrator | Saturday 13 September 2025 01:02:51 +0000 (0:00:01.895) 0:01:32.001 **** 2025-09-13 01:04:27.340484 | orchestrator | [WARNING]: Skipped 2025-09-13 01:04:27.340494 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-09-13 01:04:27.340502 | orchestrator | due to this access issue: 2025-09-13 01:04:27.340511 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-09-13 01:04:27.340519 | orchestrator | not a directory 2025-09-13 01:04:27.340528 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-13 01:04:27.340536 | orchestrator | 2025-09-13 01:04:27.340545 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-09-13 01:04:27.340554 | orchestrator | Saturday 13 September 2025 01:02:52 +0000 (0:00:01.227) 0:01:33.228 **** 2025-09-13 01:04:27.340562 | orchestrator | skipping: [testbed-manager] 2025-09-13 01:04:27.340571 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:04:27.340589 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:04:27.340598 | orchestrator | skipping: [testbed-node-2] 2025-09-13 
01:04:27.340607 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:04:27.340615 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:04:27.340624 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:04:27.340632 | orchestrator | 2025-09-13 01:04:27.340641 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-13 01:04:27.340649 | orchestrator | Saturday 13 September 2025 01:02:53 +0000 (0:00:01.150) 0:01:34.379 **** 2025-09-13 01:04:27.340658 | orchestrator | skipping: [testbed-manager] 2025-09-13 01:04:27.340666 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:04:27.340675 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:04:27.340683 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:04:27.340692 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:04:27.340700 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:04:27.340709 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:04:27.340717 | orchestrator | 2025-09-13 01:04:27.340726 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-13 01:04:27.340734 | orchestrator | Saturday 13 September 2025 01:02:54 +0000 (0:00:01.069) 0:01:35.448 **** 2025-09-13 01:04:27.340745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.340765 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-13 01:04:27.340781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.340791 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.340800 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.340850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.340867 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.340883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.340897 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-13 01:04:27.340913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.340928 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.340938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.340947 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.340969 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.340979 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.340988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.341003 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-13 01:04:27.341014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.341024 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.341039 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.341048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.341058 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.341067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.341168 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.341196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.341210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-13 01:04:27.341219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.341235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-13 01:04:27.341244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})
2025-09-13 01:04:27.341253 | orchestrator |
2025-09-13 01:04:27.341262 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-09-13 01:04:27.341271 | orchestrator | Saturday 13 September 2025 01:02:58 +0000 (0:00:04.357) 0:01:39.806 ****
2025-09-13 01:04:27.341280 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-13 01:04:27.341288 | orchestrator | skipping: [testbed-manager]
2025-09-13 01:04:27.341297 | orchestrator |
2025-09-13 01:04:27.341306 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-13 01:04:27.341314 | orchestrator | Saturday 13 September 2025 01:02:59 +0000 (0:00:00.991) 0:01:40.797 ****
2025-09-13 01:04:27.341323 | orchestrator |
2025-09-13 01:04:27.341331 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-13 01:04:27.341340 | orchestrator | Saturday 13 September 2025 01:02:59 +0000 (0:00:00.062) 0:01:40.859 ****
2025-09-13 01:04:27.341349 | orchestrator |
2025-09-13 01:04:27.341357 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-13 01:04:27.341366 | orchestrator | Saturday 13 September 2025 01:03:00 +0000 (0:00:00.063) 0:01:40.923 ****
2025-09-13 01:04:27.341374 | orchestrator |
2025-09-13 01:04:27.341383 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-13 01:04:27.341391 | orchestrator | Saturday 13 September 2025 01:03:00 +0000 (0:00:00.058) 0:01:40.982 ****
2025-09-13 01:04:27.341398 | orchestrator |
2025-09-13 01:04:27.341406 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-13 01:04:27.341414 | orchestrator | Saturday 13 September 2025 01:03:00 +0000 (0:00:00.177) 0:01:41.160 ****
2025-09-13 01:04:27.341422 | orchestrator |
2025-09-13 01:04:27.341429 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-13 01:04:27.341437 | orchestrator | Saturday 13 September 2025 01:03:00 +0000 (0:00:00.062) 0:01:41.222 ****
2025-09-13 01:04:27.341445 | orchestrator |
2025-09-13 01:04:27.341453 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-13 01:04:27.341460 | orchestrator | Saturday 13 September 2025 01:03:00 +0000 (0:00:00.063) 0:01:41.286 ****
2025-09-13 01:04:27.341468 | orchestrator |
2025-09-13 01:04:27.341476 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-09-13 01:04:27.341483 | orchestrator | Saturday 13 September 2025 01:03:00 +0000 (0:00:00.104) 0:01:41.391 ****
2025-09-13 01:04:27.341491 | orchestrator | changed: [testbed-manager]
2025-09-13 01:04:27.341499 | orchestrator |
2025-09-13 01:04:27.341507 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-09-13 01:04:27.341519 | orchestrator | Saturday 13 September 2025 01:03:20 +0000 (0:00:20.209) 0:02:01.600 ****
2025-09-13 01:04:27.341532 | orchestrator | changed: [testbed-node-4]
2025-09-13 01:04:27.341539 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:04:27.341547 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:04:27.341555 | orchestrator | changed: [testbed-node-5]
2025-09-13 01:04:27.341563 | orchestrator | changed: [testbed-manager]
2025-09-13 01:04:27.341570 | orchestrator | changed: [testbed-node-3]
2025-09-13 01:04:27.341578 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:04:27.341586 | orchestrator |
2025-09-13 01:04:27.341597 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-09-13 01:04:27.341605 | orchestrator | Saturday 13 September 2025 01:03:30 +0000 (0:00:09.595) 0:02:11.195 ****
2025-09-13 01:04:27.341613 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:04:27.341621 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:04:27.341629 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:04:27.341636 | orchestrator |
2025-09-13 01:04:27.341644 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-09-13 01:04:27.341652 | orchestrator | Saturday 13 September 2025 01:03:35 +0000 (0:00:05.338) 0:02:16.534 ****
2025-09-13 01:04:27.341660 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:04:27.341668 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:04:27.341675 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:04:27.341683 | orchestrator |
2025-09-13 01:04:27.341691 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-09-13 01:04:27.341699 | orchestrator | Saturday 13 September 2025 01:03:41 +0000 (0:00:05.510) 0:02:22.044 ****
2025-09-13 01:04:27.341706 | orchestrator | changed: [testbed-node-3]
2025-09-13 01:04:27.341714 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:04:27.341722 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:04:27.341729 | orchestrator | changed: [testbed-node-4]
2025-09-13 01:04:27.341737 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:04:27.341745 | orchestrator | changed: [testbed-manager]
2025-09-13 01:04:27.341752 | orchestrator | changed: [testbed-node-5]
2025-09-13 01:04:27.341760 | orchestrator |
2025-09-13 01:04:27.341768 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-09-13 01:04:27.341776 | orchestrator | Saturday 13 September 2025 01:03:56 +0000 (0:00:15.825) 0:02:37.870 ****
2025-09-13 01:04:27.341783 | orchestrator | changed: [testbed-manager]
2025-09-13 01:04:27.341791 | orchestrator |
2025-09-13 01:04:27.341799 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-09-13 01:04:27.341807 | orchestrator | Saturday 13 September 2025 01:04:07 +0000 (0:00:10.675) 0:02:48.545 ****
2025-09-13 01:04:27.341837 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:04:27.341851 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:04:27.341863 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:04:27.341876 | orchestrator |
2025-09-13 01:04:27.341889 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-09-13 01:04:27.341897 | orchestrator | Saturday 13 September 2025 01:04:12 +0000 (0:00:05.154) 0:02:53.699 ****
2025-09-13 01:04:27.341905 | orchestrator | changed: [testbed-manager]
2025-09-13 01:04:27.341913 | orchestrator |
2025-09-13 01:04:27.341921 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-09-13 01:04:27.341928 | orchestrator | Saturday 13 September 2025 01:04:18 +0000 (0:00:05.301) 0:02:59.000 ****
2025-09-13 01:04:27.341936 | orchestrator | changed: [testbed-node-3]
2025-09-13 01:04:27.341944 | orchestrator | changed: [testbed-node-4]
2025-09-13 01:04:27.341952 | orchestrator | changed: [testbed-node-5]
2025-09-13 01:04:27.341960 | orchestrator |
2025-09-13 01:04:27.341967 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 01:04:27.341976 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-13 01:04:27.341985 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-13 01:04:27.341998 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-13 01:04:27.342006 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-13 01:04:27.342014 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-13 01:04:27.342076 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-13 01:04:27.342084 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-13 01:04:27.342092 | orchestrator |
2025-09-13 01:04:27.342100 | orchestrator |
2025-09-13 01:04:27.342108 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 01:04:27.342116 | orchestrator | Saturday 13 September 2025 01:04:24 +0000 (0:00:06.583) 0:03:05.583 ****
2025-09-13 01:04:27.342124 | orchestrator | ===============================================================================
2025-09-13 01:04:27.342131 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 26.54s
2025-09-13 01:04:27.342139 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 20.31s
2025-09-13 01:04:27.342147 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 20.21s
2025-09-13 01:04:27.342155 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.83s
2025-09-13 01:04:27.342163 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 10.67s
2025-09-13 01:04:27.342176 | orchestrator | prometheus : Restart prometheus-node-exporter container ----------------- 9.60s
2025-09-13 01:04:27.342184 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.58s
2025-09-13 01:04:27.342192 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.23s
2025-09-13 01:04:27.342200 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.51s
2025-09-13 01:04:27.342212 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.36s
2025-09-13 01:04:27.342220 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.34s
2025-09-13 01:04:27.342228 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.30s
2025-09-13 01:04:27.342236 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.15s
2025-09-13 01:04:27.342244 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.36s
2025-09-13 01:04:27.342251 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.07s
2025-09-13 01:04:27.342259 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.94s
2025-09-13 01:04:27.342267 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.63s
2025-09-13 01:04:27.342275 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.97s
2025-09-13 01:04:27.342282 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.71s
2025-09-13 01:04:27.342290 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.33s
2025-09-13 01:04:27.342298 | orchestrator | 2025-09-13 01:04:27 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:04:27.342306 | orchestrator | 2025-09-13 01:04:27 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state STARTED
2025-09-13 01:04:27.342314 | orchestrator | 2025-09-13 01:04:27 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:04:27.342322 | orchestrator | 2025-09-13 01:04:27 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:04:30.370917 | orchestrator | 2025-09-13 01:04:30 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED
2025-09-13 01:04:30.372251 | orchestrator | 2025-09-13 01:04:30 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:04:30.374438 |
orchestrator | 2025-09-13 01:04:30 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state STARTED
2025-09-13 01:04:30.377381 | orchestrator | 2025-09-13 01:04:30 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:04:30.377782 | orchestrator | 2025-09-13 01:04:30 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:04:33.432069 | orchestrator | 2025-09-13 01:04:33 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED
2025-09-13 01:04:33.433171 | orchestrator | 2025-09-13 01:04:33 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:04:33.434626 | orchestrator | 2025-09-13 01:04:33 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state STARTED
2025-09-13 01:04:33.436293 | orchestrator | 2025-09-13 01:04:33 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:04:33.436313 | orchestrator | 2025-09-13 01:04:33 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:04:36.474374 | orchestrator | 2025-09-13 01:04:36 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED
2025-09-13 01:04:36.476067 | orchestrator | 2025-09-13 01:04:36 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:04:36.477577 | orchestrator | 2025-09-13 01:04:36 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state STARTED
2025-09-13 01:04:36.480078 | orchestrator | 2025-09-13 01:04:36 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:04:36.480355 | orchestrator | 2025-09-13 01:04:36 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:04:39.527605 | orchestrator | 2025-09-13 01:04:39 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED
2025-09-13 01:04:39.529937 | orchestrator | 2025-09-13 01:04:39 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:04:39.532082 | orchestrator | 2025-09-13 01:04:39 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state STARTED
2025-09-13 01:04:39.533300 | orchestrator | 2025-09-13 01:04:39 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:04:39.533322 | orchestrator | 2025-09-13 01:04:39 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:04:42.580911 | orchestrator | 2025-09-13 01:04:42 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED
2025-09-13 01:04:42.582562 | orchestrator | 2025-09-13 01:04:42 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:04:42.584564 | orchestrator | 2025-09-13 01:04:42 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state STARTED
2025-09-13 01:04:42.586214 | orchestrator | 2025-09-13 01:04:42 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:04:42.586269 | orchestrator | 2025-09-13 01:04:42 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:04:45.638737 | orchestrator | 2025-09-13 01:04:45 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED
2025-09-13 01:04:45.638999 | orchestrator | 2025-09-13 01:04:45 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:04:45.639891 | orchestrator | 2025-09-13 01:04:45 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state STARTED
2025-09-13 01:04:45.640757 | orchestrator | 2025-09-13 01:04:45 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:04:45.640783 | orchestrator | 2025-09-13 01:04:45 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:04:48.690367 | orchestrator | 2025-09-13 01:04:48 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED
2025-09-13 01:04:48.691754 | orchestrator | 2025-09-13 01:04:48 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:04:48.695192 | orchestrator | 2025-09-13 01:04:48 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state STARTED
2025-09-13 01:04:48.697717 | orchestrator | 2025-09-13 01:04:48 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:04:48.697781 | orchestrator | 2025-09-13 01:04:48 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:04:51.742745 | orchestrator | 2025-09-13 01:04:51 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED
2025-09-13 01:04:51.744653 | orchestrator | 2025-09-13 01:04:51 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:04:51.746976 | orchestrator | 2025-09-13 01:04:51 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state STARTED
2025-09-13 01:04:51.749000 | orchestrator | 2025-09-13 01:04:51 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:04:51.749027 | orchestrator | 2025-09-13 01:04:51 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:04:54.798441 | orchestrator | 2025-09-13 01:04:54 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED
2025-09-13 01:04:54.800425 | orchestrator | 2025-09-13 01:04:54 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:04:54.802744 | orchestrator | 2025-09-13 01:04:54 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state STARTED
2025-09-13 01:04:54.803655 | orchestrator | 2025-09-13 01:04:54 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:04:54.803683 | orchestrator | 2025-09-13 01:04:54 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:04:57.838947 | orchestrator | 2025-09-13 01:04:57 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED
2025-09-13 01:04:57.839391 | orchestrator | 2025-09-13 01:04:57 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:04:57.842382 | orchestrator | 2025-09-13 01:04:57 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state STARTED
2025-09-13 01:04:57.844605 | orchestrator | 2025-09-13 01:04:57 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:04:57.845235 | orchestrator | 2025-09-13 01:04:57 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:05:00.883126 | orchestrator | 2025-09-13 01:05:00 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED
2025-09-13 01:05:00.883503 | orchestrator | 2025-09-13 01:05:00 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:05:00.884438 | orchestrator | 2025-09-13 01:05:00 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state STARTED
2025-09-13 01:05:00.885573 | orchestrator | 2025-09-13 01:05:00 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:05:00.885596 | orchestrator | 2025-09-13 01:05:00 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:05:03.911627 | orchestrator | 2025-09-13 01:05:03 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED
2025-09-13 01:05:03.911755 | orchestrator | 2025-09-13 01:05:03 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:05:03.912305 | orchestrator | 2025-09-13 01:05:03 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state STARTED
2025-09-13 01:05:03.912954 | orchestrator | 2025-09-13 01:05:03 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:05:03.913039 | orchestrator | 2025-09-13 01:05:03 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:05:06.951379 | orchestrator | 2025-09-13 01:05:06 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED
2025-09-13 01:05:06.954178 | orchestrator | 2025-09-13 01:05:06 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:05:06.954219 | orchestrator | 2025-09-13 01:05:06 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state STARTED
2025-09-13 01:05:06.954981 | orchestrator | 2025-09-13 01:05:06 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:05:06.955123 | orchestrator | 2025-09-13 01:05:06 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:05:09.999886 | orchestrator | 2025-09-13 01:05:09 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED
2025-09-13 01:05:10.000665 | orchestrator | 2025-09-13 01:05:10 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:05:10.001729 | orchestrator | 2025-09-13 01:05:10 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state STARTED
2025-09-13 01:05:10.002999 | orchestrator | 2025-09-13 01:05:10 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:05:10.003216 | orchestrator | 2025-09-13 01:05:10 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:05:13.053619 | orchestrator | 2025-09-13 01:05:13 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state STARTED
2025-09-13 01:05:13.053938 | orchestrator | 2025-09-13 01:05:13 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:05:13.054972 | orchestrator | 2025-09-13 01:05:13 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state STARTED
2025-09-13 01:05:13.055511 | orchestrator | 2025-09-13 01:05:13 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:05:13.055667 | orchestrator | 2025-09-13 01:05:13 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:05:16.088353 | orchestrator | 2025-09-13 01:05:16 | INFO  | Task e7434a3a-461a-4c49-87c7-fb680577cb30 is in state SUCCESS
2025-09-13 01:05:16.089275 | orchestrator |
2025-09-13 01:05:16.089314 | orchestrator |
2025-09-13 01:05:16.089328 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 01:05:16.089341 |
orchestrator |
2025-09-13 01:05:16.089353 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-13 01:05:16.089366 | orchestrator | Saturday 13 September 2025 01:01:34 +0000 (0:00:00.241) 0:00:00.241 ****
2025-09-13 01:05:16.089378 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:05:16.089391 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:05:16.089403 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:05:16.089414 | orchestrator | ok: [testbed-node-3]
2025-09-13 01:05:16.089426 | orchestrator | ok: [testbed-node-4]
2025-09-13 01:05:16.089437 | orchestrator | ok: [testbed-node-5]
2025-09-13 01:05:16.089448 | orchestrator |
2025-09-13 01:05:16.089460 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 01:05:16.089472 | orchestrator | Saturday 13 September 2025 01:01:34 +0000 (0:00:00.655) 0:00:00.897 ****
2025-09-13 01:05:16.089483 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-09-13 01:05:16.089496 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-09-13 01:05:16.089540 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-09-13 01:05:16.089632 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-09-13 01:05:16.089643 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-09-13 01:05:16.089654 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-09-13 01:05:16.089666 | orchestrator |
2025-09-13 01:05:16.089677 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-09-13 01:05:16.089688 | orchestrator |
2025-09-13 01:05:16.089699 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-13 01:05:16.089710 | orchestrator | Saturday 13 September 2025 01:01:35 +0000 (0:00:00.782) 0:00:01.679 ****
2025-09-13 01:05:16.089722 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 01:05:16.089735 | orchestrator |
2025-09-13 01:05:16.089746 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-09-13 01:05:16.089757 | orchestrator | Saturday 13 September 2025 01:01:36 +0000 (0:00:01.096) 0:00:02.776 ****
2025-09-13 01:05:16.089769 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-09-13 01:05:16.089780 | orchestrator |
2025-09-13 01:05:16.089791 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-09-13 01:05:16.089802 | orchestrator | Saturday 13 September 2025 01:01:40 +0000 (0:00:03.242) 0:00:06.018 ****
2025-09-13 01:05:16.089813 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-09-13 01:05:16.089856 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-09-13 01:05:16.089868 | orchestrator |
2025-09-13 01:05:16.089879 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-09-13 01:05:16.089890 | orchestrator | Saturday 13 September 2025 01:01:46 +0000 (0:00:06.303) 0:00:12.321 ****
2025-09-13 01:05:16.089901 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-13 01:05:16.089912 | orchestrator |
2025-09-13 01:05:16.089942 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-09-13 01:05:16.089954 | orchestrator | Saturday 13 September 2025 01:01:49 +0000 (0:00:03.491) 0:00:15.813 ****
2025-09-13 01:05:16.089965 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-13 01:05:16.089976 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-09-13 01:05:16.089987 | orchestrator |
2025-09-13 01:05:16.090260 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-09-13 01:05:16.090273 | orchestrator | Saturday 13 September 2025 01:01:54 +0000 (0:00:04.313) 0:00:20.126 ****
2025-09-13 01:05:16.090285 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-13 01:05:16.090297 | orchestrator |
2025-09-13 01:05:16.090308 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-09-13 01:05:16.090318 | orchestrator | Saturday 13 September 2025 01:01:57 +0000 (0:00:03.279) 0:00:23.406 ****
2025-09-13 01:05:16.090330 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-09-13 01:05:16.090342 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-09-13 01:05:16.090353 | orchestrator |
2025-09-13 01:05:16.090364 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-09-13 01:05:16.090375 | orchestrator | Saturday 13 September 2025 01:02:04 +0000 (0:00:07.157) 0:00:30.564 ****
2025-09-13 01:05:16.090391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz',
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 01:05:16.090439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 01:05:16.090453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 01:05:16.090474 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.090487 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.090507 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.090529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.090542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.090553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.090611 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.090626 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.090646 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.090658 | orchestrator | 2025-09-13 01:05:16.090675 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-13 01:05:16.090687 | orchestrator | Saturday 13 September 2025 01:02:06 +0000 (0:00:02.351) 0:00:32.916 **** 2025-09-13 01:05:16.090699 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:05:16.090710 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:05:16.090721 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:05:16.090732 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:05:16.090743 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:05:16.090753 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:05:16.090764 | orchestrator | 2025-09-13 01:05:16.090775 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-13 01:05:16.090786 | orchestrator | Saturday 13 September 2025 01:02:07 +0000 (0:00:00.628) 0:00:33.544 **** 2025-09-13 01:05:16.090797 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:05:16.090808 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:05:16.090837 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:05:16.090849 | orchestrator | included: 
/ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-13 01:05:16.090861 | orchestrator | 2025-09-13 01:05:16.090872 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-13 01:05:16.090883 | orchestrator | Saturday 13 September 2025 01:02:08 +0000 (0:00:01.072) 0:00:34.616 **** 2025-09-13 01:05:16.090894 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-13 01:05:16.090904 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-13 01:05:16.090915 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-13 01:05:16.090926 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-13 01:05:16.090937 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-13 01:05:16.090948 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-13 01:05:16.090958 | orchestrator | 2025-09-13 01:05:16.090969 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-13 01:05:16.090980 | orchestrator | Saturday 13 September 2025 01:02:10 +0000 (0:00:02.173) 0:00:36.789 **** 2025-09-13 01:05:16.090999 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-13 01:05:16.091013 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-13 01:05:16.091033 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-13 01:05:16.091114 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-13 01:05:16.091129 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-13 01:05:16.091141 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-13 01:05:16.091158 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-13 01:05:16.091179 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-13 01:05:16.091223 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-13 01:05:16.091236 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-13 01:05:16.091255 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-13 01:05:16.091274 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-13 01:05:16.091285 | orchestrator | 2025-09-13 01:05:16.091296 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-13 01:05:16.091307 | orchestrator | Saturday 13 September 2025 01:02:15 +0000 (0:00:04.404) 0:00:41.194 **** 2025-09-13 01:05:16.091319 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-13 01:05:16.091330 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 
2025-09-13 01:05:16.091342 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-13 01:05:16.091352 | orchestrator | 2025-09-13 01:05:16.091363 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-13 01:05:16.091374 | orchestrator | Saturday 13 September 2025 01:02:17 +0000 (0:00:02.502) 0:00:43.697 **** 2025-09-13 01:05:16.091385 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-13 01:05:16.091396 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-13 01:05:16.091407 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-13 01:05:16.091417 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-13 01:05:16.091428 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-13 01:05:16.091470 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-13 01:05:16.091482 | orchestrator | 2025-09-13 01:05:16.091493 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-13 01:05:16.091504 | orchestrator | Saturday 13 September 2025 01:02:20 +0000 (0:00:03.249) 0:00:46.947 **** 2025-09-13 01:05:16.091515 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-13 01:05:16.091526 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-13 01:05:16.091537 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-13 01:05:16.091548 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-13 01:05:16.091559 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-13 01:05:16.091570 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-13 01:05:16.091581 | orchestrator | 2025-09-13 01:05:16.091592 | orchestrator | TASK 
[cinder : Check if policies shall be overwritten] ************************* 2025-09-13 01:05:16.091603 | orchestrator | Saturday 13 September 2025 01:02:22 +0000 (0:00:01.191) 0:00:48.139 **** 2025-09-13 01:05:16.091613 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:05:16.091624 | orchestrator | 2025-09-13 01:05:16.091635 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-13 01:05:16.091646 | orchestrator | Saturday 13 September 2025 01:02:22 +0000 (0:00:00.106) 0:00:48.246 **** 2025-09-13 01:05:16.091657 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:05:16.091668 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:05:16.091679 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:05:16.091690 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:05:16.091701 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:05:16.091711 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:05:16.091742 | orchestrator | 2025-09-13 01:05:16.091753 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-13 01:05:16.091764 | orchestrator | Saturday 13 September 2025 01:02:23 +0000 (0:00:00.933) 0:00:49.179 **** 2025-09-13 01:05:16.091776 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-13 01:05:16.091788 | orchestrator | 2025-09-13 01:05:16.091799 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-13 01:05:16.091810 | orchestrator | Saturday 13 September 2025 01:02:24 +0000 (0:00:01.502) 0:00:50.681 **** 2025-09-13 01:05:16.091885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 01:05:16.091900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 01:05:16.091949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 01:05:16.091963 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.091985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}) 2025-09-13 01:05:16.092002 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.092014 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.092026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.092072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.092086 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.092105 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.092122 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.092134 | orchestrator | 2025-09-13 01:05:16.092145 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-13 01:05:16.092156 | orchestrator | Saturday 13 September 2025 01:02:28 +0000 (0:00:03.547) 0:00:54.229 **** 2025-09-13 01:05:16.092168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-13 01:05:16.092185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.092196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}})  2025-09-13 01:05:16.092212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.092223 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:05:16.092238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-13 01:05:16.092248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.092258 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:05:16.092268 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:05:16.092278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.092295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.092311 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:05:16.092322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.092332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.092342 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:05:16.092356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.092367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.092377 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:05:16.092387 | orchestrator | 2025-09-13 01:05:16.092397 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-13 01:05:16.092407 | orchestrator | Saturday 13 September 2025 01:02:29 +0000 (0:00:01.330) 0:00:55.559 **** 2025-09-13 01:05:16.092430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-13 01:05:16.092441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.092451 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:05:16.092461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-13 01:05:16.092476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.092487 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:05:16.092497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.092519 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.092530 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:05:16.092540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-13 01:05:16.092551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.092561 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:05:16.092576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.092586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-13 
01:05:16.092597 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:05:16.092613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.092630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.092726 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:05:16.092738 | orchestrator | 2025-09-13 01:05:16.092748 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-13 01:05:16.092758 | orchestrator | Saturday 13 September 2025 01:02:31 
+0000 (0:00:01.821) 0:00:57.380 **** 2025-09-13 01:05:16.092768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 01:05:16.092787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 01:05:16.092798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 01:05:16.092846 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.092858 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.092869 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.092884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.092895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.092912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.092929 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.092940 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.092950 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.092960 | orchestrator | 2025-09-13 01:05:16.092970 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-13 01:05:16.092980 | orchestrator | Saturday 13 September 2025 01:02:34 +0000 (0:00:03.214) 0:01:00.594 **** 2025-09-13 01:05:16.092995 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-13 01:05:16.093005 | orchestrator | 
skipping: [testbed-node-3] 2025-09-13 01:05:16.093015 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-13 01:05:16.093025 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:05:16.093035 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-13 01:05:16.093045 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-13 01:05:16.093054 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-13 01:05:16.093070 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:05:16.093080 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-13 01:05:16.093090 | orchestrator | 2025-09-13 01:05:16.093099 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-13 01:05:16.093109 | orchestrator | Saturday 13 September 2025 01:02:36 +0000 (0:00:02.349) 0:01:02.943 **** 2025-09-13 01:05:16.093119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 01:05:16.093135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 01:05:16.093146 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.093161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.093171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 01:05:16.093195 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.093205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.093215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.093226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.093241 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.093260 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.093271 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.093281 | orchestrator | 2025-09-13 01:05:16.093291 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-13 01:05:16.093301 | orchestrator | Saturday 13 September 2025 01:02:47 +0000 (0:00:11.031) 0:01:13.975 **** 2025-09-13 01:05:16.093316 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:05:16.093326 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:05:16.093336 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:05:16.093346 | orchestrator | changed: [testbed-node-3] 2025-09-13 01:05:16.093358 | orchestrator | changed: [testbed-node-4] 2025-09-13 01:05:16.093370 | orchestrator | changed: [testbed-node-5] 2025-09-13 01:05:16.093381 | orchestrator | 2025-09-13 01:05:16.093393 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-13 01:05:16.093404 | orchestrator | Saturday 13 September 2025 01:02:50 +0000 (0:00:02.241) 0:01:16.216 **** 2025-09-13 01:05:16.093417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-13 01:05:16.093429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.093448 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:05:16.093465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-13 01:05:16.093478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.093490 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:05:16.093509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-13 01:05:16.093522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.093534 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:05:16.093546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.093571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.093584 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:05:16.093595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.093608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.093619 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:05:16.093638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.093651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-13 01:05:16.093673 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:05:16.093686 | orchestrator | 2025-09-13 01:05:16.093698 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-13 01:05:16.093710 | orchestrator | Saturday 13 September 2025 01:02:51 +0000 (0:00:01.712) 0:01:17.928 **** 2025-09-13 01:05:16.093720 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:05:16.093730 | orchestrator | skipping: 
[testbed-node-1] 2025-09-13 01:05:16.093740 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:05:16.093749 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:05:16.093759 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:05:16.093769 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:05:16.093779 | orchestrator | 2025-09-13 01:05:16.093788 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-13 01:05:16.093798 | orchestrator | Saturday 13 September 2025 01:02:52 +0000 (0:00:00.491) 0:01:18.420 **** 2025-09-13 01:05:16.093816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 01:05:16.093875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 01:05:16.093895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-13 01:05:16.093906 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.093929 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.093939 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.093950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.093966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.093976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.093993 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.094003 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.094058 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-13 01:05:16.094072 | orchestrator | 2025-09-13 01:05:16.094082 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-13 01:05:16.094092 | orchestrator | Saturday 13 September 2025 01:02:55 +0000 (0:00:02.924) 0:01:21.344 **** 2025-09-13 01:05:16.094101 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:05:16.094109 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:05:16.094118 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:05:16.094126 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:05:16.094133 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:05:16.094141 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:05:16.094230 | orchestrator | 2025-09-13 01:05:16.094243 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-13 01:05:16.094251 | orchestrator | Saturday 13 September 2025 01:02:56 +0000 (0:00:01.219) 0:01:22.564 **** 2025-09-13 01:05:16.094299 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:05:16.094308 | orchestrator | 2025-09-13 01:05:16.094316 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-13 01:05:16.094324 | orchestrator | Saturday 13 September 2025 01:02:58 +0000 (0:00:02.090) 0:01:24.654 **** 2025-09-13 01:05:16.094332 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:05:16.094340 | orchestrator | 2025-09-13 01:05:16.094348 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-13 01:05:16.094356 | orchestrator | Saturday 13 September 2025 01:03:00 +0000 (0:00:01.935) 0:01:26.590 **** 2025-09-13 01:05:16.094364 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:05:16.094372 | orchestrator | 2025-09-13 01:05:16.094380 | orchestrator | TASK [cinder 
: Flush handlers] ************************************************* 2025-09-13 01:05:16.094388 | orchestrator | Saturday 13 September 2025 01:03:18 +0000 (0:00:17.567) 0:01:44.158 **** 2025-09-13 01:05:16.094403 | orchestrator | 2025-09-13 01:05:16.094417 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-13 01:05:16.094425 | orchestrator | Saturday 13 September 2025 01:03:18 +0000 (0:00:00.101) 0:01:44.259 **** 2025-09-13 01:05:16.094433 | orchestrator | 2025-09-13 01:05:16.094442 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-13 01:05:16.094450 | orchestrator | Saturday 13 September 2025 01:03:18 +0000 (0:00:00.076) 0:01:44.335 **** 2025-09-13 01:05:16.094458 | orchestrator | 2025-09-13 01:05:16.094466 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-13 01:05:16.094474 | orchestrator | Saturday 13 September 2025 01:03:18 +0000 (0:00:00.067) 0:01:44.403 **** 2025-09-13 01:05:16.094482 | orchestrator | 2025-09-13 01:05:16.094490 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-13 01:05:16.094498 | orchestrator | Saturday 13 September 2025 01:03:18 +0000 (0:00:00.064) 0:01:44.467 **** 2025-09-13 01:05:16.094506 | orchestrator | 2025-09-13 01:05:16.094514 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-13 01:05:16.094522 | orchestrator | Saturday 13 September 2025 01:03:18 +0000 (0:00:00.065) 0:01:44.533 **** 2025-09-13 01:05:16.094530 | orchestrator | 2025-09-13 01:05:16.094538 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-13 01:05:16.094546 | orchestrator | Saturday 13 September 2025 01:03:18 +0000 (0:00:00.066) 0:01:44.599 **** 2025-09-13 01:05:16.094554 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:05:16.094562 | 
orchestrator | changed: [testbed-node-1] 2025-09-13 01:05:16.094570 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:05:16.094578 | orchestrator | 2025-09-13 01:05:16.094586 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-13 01:05:16.094594 | orchestrator | Saturday 13 September 2025 01:03:46 +0000 (0:00:27.519) 0:02:12.119 **** 2025-09-13 01:05:16.094602 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:05:16.094610 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:05:16.094618 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:05:16.094626 | orchestrator | 2025-09-13 01:05:16.094634 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-13 01:05:16.094642 | orchestrator | Saturday 13 September 2025 01:03:52 +0000 (0:00:06.405) 0:02:18.527 **** 2025-09-13 01:05:16.094650 | orchestrator | changed: [testbed-node-4] 2025-09-13 01:05:16.094658 | orchestrator | changed: [testbed-node-5] 2025-09-13 01:05:16.094665 | orchestrator | changed: [testbed-node-3] 2025-09-13 01:05:16.094673 | orchestrator | 2025-09-13 01:05:16.094681 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-13 01:05:16.094689 | orchestrator | Saturday 13 September 2025 01:05:02 +0000 (0:01:09.647) 0:03:28.175 **** 2025-09-13 01:05:16.094698 | orchestrator | changed: [testbed-node-4] 2025-09-13 01:05:16.094705 | orchestrator | changed: [testbed-node-3] 2025-09-13 01:05:16.094713 | orchestrator | changed: [testbed-node-5] 2025-09-13 01:05:16.094721 | orchestrator | 2025-09-13 01:05:16.094729 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-13 01:05:16.094737 | orchestrator | Saturday 13 September 2025 01:05:13 +0000 (0:00:11.575) 0:03:39.750 **** 2025-09-13 01:05:16.094745 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:05:16.094753 | orchestrator 
| 2025-09-13 01:05:16.094761 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 01:05:16.094774 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-13 01:05:16.094784 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-13 01:05:16.094792 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-13 01:05:16.094807 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-13 01:05:16.094815 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-13 01:05:16.094843 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-13 01:05:16.094851 | orchestrator | 2025-09-13 01:05:16.094859 | orchestrator | 2025-09-13 01:05:16.094867 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 01:05:16.094875 | orchestrator | Saturday 13 September 2025 01:05:14 +0000 (0:00:00.686) 0:03:40.437 **** 2025-09-13 01:05:16.094883 | orchestrator | =============================================================================== 2025-09-13 01:05:16.094891 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 69.65s 2025-09-13 01:05:16.094900 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.52s 2025-09-13 01:05:16.094907 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.57s 2025-09-13 01:05:16.094915 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.58s 2025-09-13 01:05:16.094923 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 
11.03s 2025-09-13 01:05:16.094931 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.16s 2025-09-13 01:05:16.094941 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.41s 2025-09-13 01:05:16.094951 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.30s 2025-09-13 01:05:16.094966 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.42s 2025-09-13 01:05:16.094976 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.31s 2025-09-13 01:05:16.094985 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.55s 2025-09-13 01:05:16.094994 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.49s 2025-09-13 01:05:16.095003 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.28s 2025-09-13 01:05:16.095012 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.25s 2025-09-13 01:05:16.095021 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.24s 2025-09-13 01:05:16.095030 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.21s 2025-09-13 01:05:16.095040 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.92s 2025-09-13 01:05:16.095049 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.49s 2025-09-13 01:05:16.095058 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.35s 2025-09-13 01:05:16.095068 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.35s 2025-09-13 01:05:16.095077 | orchestrator | 2025-09-13 01:05:16 | INFO  | Task d1558316-b694-41a9-a54e-3d42927d2086 is in state 
STARTED 2025-09-13 01:05:16.095086 | orchestrator | 2025-09-13 01:05:16 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:05:16.095096 | orchestrator | 2025-09-13 01:05:16 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state STARTED 2025-09-13 01:05:16.095105 | orchestrator | 2025-09-13 01:05:16 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED 2025-09-13 01:05:16.095115 | orchestrator | 2025-09-13 01:05:16 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:06:38.042451 | orchestrator | 2025-09-13 01:06:38 | INFO  | Task d1558316-b694-41a9-a54e-3d42927d2086 is in state STARTED 2025-09-13 01:06:38.042563 | orchestrator | 2025-09-13 01:06:38 | INFO  | Task 
7d53163a-356a-4a50-82a5-d4b036da6940 is in state STARTED 2025-09-13 01:06:38.043111 | orchestrator | 2025-09-13 01:06:38 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:06:38.044260 | orchestrator | 2025-09-13 01:06:38 | INFO  | Task 21ba52ff-2ce7-4a3e-8b35-06da03b4b338 is in state SUCCESS 2025-09-13 01:06:38.045534 | orchestrator | 2025-09-13 01:06:38.045648 | orchestrator | 2025-09-13 01:06:38.045661 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-13 01:06:38.045670 | orchestrator | 2025-09-13 01:06:38.045676 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-13 01:06:38.045884 | orchestrator | Saturday 13 September 2025 01:04:28 +0000 (0:00:00.236) 0:00:00.236 **** 2025-09-13 01:06:38.045897 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:06:38.045905 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:06:38.045911 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:06:38.045918 | orchestrator | 2025-09-13 01:06:38.045924 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-13 01:06:38.045931 | orchestrator | Saturday 13 September 2025 01:04:28 +0000 (0:00:00.248) 0:00:00.485 **** 2025-09-13 01:06:38.045937 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-13 01:06:38.045944 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-13 01:06:38.045950 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-13 01:06:38.045957 | orchestrator | 2025-09-13 01:06:38.045964 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-13 01:06:38.045970 | orchestrator | 2025-09-13 01:06:38.045976 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-13 01:06:38.046003 | orchestrator | Saturday 13 
September 2025 01:04:29 +0000 (0:00:00.326) 0:00:00.811 **** 2025-09-13 01:06:38.046010 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 01:06:38.046054 | orchestrator | 2025-09-13 01:06:38.046061 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-13 01:06:38.046067 | orchestrator | Saturday 13 September 2025 01:04:29 +0000 (0:00:00.505) 0:00:01.317 **** 2025-09-13 01:06:38.046074 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-13 01:06:38.046080 | orchestrator | 2025-09-13 01:06:38.046087 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-13 01:06:38.046093 | orchestrator | Saturday 13 September 2025 01:04:33 +0000 (0:00:03.835) 0:00:05.152 **** 2025-09-13 01:06:38.046099 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-13 01:06:38.046106 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-13 01:06:38.046112 | orchestrator | 2025-09-13 01:06:38.046129 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-13 01:06:38.046136 | orchestrator | Saturday 13 September 2025 01:04:39 +0000 (0:00:06.395) 0:00:11.547 **** 2025-09-13 01:06:38.046142 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-13 01:06:38.046148 | orchestrator | 2025-09-13 01:06:38.046155 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-13 01:06:38.046161 | orchestrator | Saturday 13 September 2025 01:04:43 +0000 (0:00:03.469) 0:00:15.017 **** 2025-09-13 01:06:38.046167 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-13 01:06:38.046174 | orchestrator | changed: [testbed-node-0] => 
(item=barbican -> service) 2025-09-13 01:06:38.046180 | orchestrator | 2025-09-13 01:06:38.046186 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-13 01:06:38.046192 | orchestrator | Saturday 13 September 2025 01:04:47 +0000 (0:00:03.744) 0:00:18.761 **** 2025-09-13 01:06:38.046198 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-13 01:06:38.046205 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-13 01:06:38.046211 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-13 01:06:38.046217 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-13 01:06:38.046223 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-13 01:06:38.046230 | orchestrator | 2025-09-13 01:06:38.046236 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-13 01:06:38.046242 | orchestrator | Saturday 13 September 2025 01:05:02 +0000 (0:00:15.541) 0:00:34.303 **** 2025-09-13 01:06:38.046248 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-13 01:06:38.046255 | orchestrator | 2025-09-13 01:06:38.046261 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-13 01:06:38.046267 | orchestrator | Saturday 13 September 2025 01:05:06 +0000 (0:00:03.995) 0:00:38.298 **** 2025-09-13 01:06:38.046277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-13 01:06:38.046306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-13 01:06:38.046323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': 
'30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-13 01:06:38.046331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046384 | orchestrator | 2025-09-13 01:06:38.046390 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-13 01:06:38.046397 | orchestrator | Saturday 13 September 2025 01:05:08 +0000 (0:00:01.560) 0:00:39.858 **** 2025-09-13 01:06:38.046403 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-13 01:06:38.046410 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-13 01:06:38.046416 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-13 01:06:38.046422 | orchestrator | 2025-09-13 01:06:38.046435 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-13 01:06:38.046441 | orchestrator | Saturday 13 September 2025 01:05:09 +0000 (0:00:01.002) 0:00:40.861 **** 2025-09-13 01:06:38.046447 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:06:38.046454 | orchestrator | 2025-09-13 01:06:38.046460 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-13 01:06:38.046466 | orchestrator | Saturday 13 September 2025 01:05:09 +0000 (0:00:00.242) 0:00:41.104 **** 2025-09-13 01:06:38.046472 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:06:38.046478 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:06:38.046484 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:06:38.046491 | orchestrator | 2025-09-13 01:06:38.046497 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-13 01:06:38.046503 | 
orchestrator | Saturday 13 September 2025 01:05:10 +0000 (0:00:00.811) 0:00:41.916 **** 2025-09-13 01:06:38.046509 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 01:06:38.046515 | orchestrator | 2025-09-13 01:06:38.046521 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-13 01:06:38.046528 | orchestrator | Saturday 13 September 2025 01:05:10 +0000 (0:00:00.643) 0:00:42.559 **** 2025-09-13 01:06:38.046534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-13 01:06:38.046551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-13 01:06:38.046558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-13 01:06:38.046568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046619 | orchestrator | 2025-09-13 01:06:38.046625 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-13 01:06:38.046631 | orchestrator | Saturday 13 September 2025 01:05:14 +0000 (0:00:03.500) 0:00:46.059 **** 2025-09-13 01:06:38.046641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-13 01:06:38.046648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.046658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.046665 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:06:38.046676 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-13 01:06:38.046682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.046689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.046696 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:06:38.046705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-13 01:06:38.046712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.046723 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.046730 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:06:38.046736 | orchestrator | 2025-09-13 01:06:38.046742 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-13 01:06:38.046749 | orchestrator | Saturday 13 September 2025 01:05:16 +0000 (0:00:02.126) 0:00:48.186 **** 2025-09-13 01:06:38.046760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-13 01:06:38.046767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.046776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.046783 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:06:38.046790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-13 01:06:38.046801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.046807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.046814 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:06:38.046825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-13 01:06:38.046847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.046857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.046867 | orchestrator | skipping: [testbed-node-2] 2025-09-13 
01:06:38.046874 | orchestrator | 2025-09-13 01:06:38.046880 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-13 01:06:38.046886 | orchestrator | Saturday 13 September 2025 01:05:17 +0000 (0:00:01.021) 0:00:49.208 **** 2025-09-13 01:06:38.046893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-13 01:06:38.046903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-13 01:06:38.046910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-13 01:06:38.046916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.046968 | orchestrator | 2025-09-13 01:06:38.046975 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-13 01:06:38.046981 | orchestrator | Saturday 13 September 2025 01:05:21 +0000 (0:00:03.611) 0:00:52.820 **** 2025-09-13 01:06:38.046987 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:06:38.046993 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:06:38.046999 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:06:38.047006 | orchestrator | 2025-09-13 01:06:38.047012 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-13 01:06:38.047018 | orchestrator | Saturday 13 September 2025 01:05:24 +0000 (0:00:02.924) 0:00:55.745 **** 2025-09-13 01:06:38.047024 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-13 01:06:38.047030 | orchestrator | 2025-09-13 
01:06:38.047037 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-13 01:06:38.047047 | orchestrator | Saturday 13 September 2025 01:05:26 +0000 (0:00:02.315) 0:00:58.060 **** 2025-09-13 01:06:38.047053 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:06:38.047059 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:06:38.047065 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:06:38.047072 | orchestrator | 2025-09-13 01:06:38.047078 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-13 01:06:38.047084 | orchestrator | Saturday 13 September 2025 01:05:27 +0000 (0:00:00.846) 0:00:58.907 **** 2025-09-13 01:06:38.047094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-13 01:06:38.047101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-13 01:06:38.047111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-13 01:06:38.047118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.047130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.047140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.047146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.047153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.047160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.047166 | orchestrator | 2025-09-13 01:06:38.047172 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-13 01:06:38.047179 | orchestrator | Saturday 13 September 2025 01:05:36 +0000 (0:00:09.651) 0:01:08.558 **** 2025-09-13 01:06:38.047189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-13 01:06:38.047200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.047210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.047217 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:06:38.047223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-13 01:06:38.047230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.047239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.047246 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:06:38.047256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-13 01:06:38.047266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.047273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:06:38.047280 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:06:38.047286 | orchestrator | 2025-09-13 01:06:38.047292 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-13 01:06:38.047299 | orchestrator | Saturday 13 September 2025 01:05:37 +0000 (0:00:00.818) 0:01:09.377 **** 2025-09-13 01:06:38.047305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-13 01:06:38.047315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-13 01:06:38.047326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-13 01:06:38.047336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.047343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.047349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.047356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.047367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:06:38.047380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}}) 2025-09-13 01:06:38.047387 | orchestrator | 2025-09-13 01:06:38.047393 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-13 01:06:38.047400 | orchestrator | Saturday 13 September 2025 01:05:41 +0000 (0:00:03.331) 0:01:12.708 **** 2025-09-13 01:06:38.047406 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:06:38.047412 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:06:38.047418 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:06:38.047425 | orchestrator | 2025-09-13 01:06:38.047431 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-13 01:06:38.047437 | orchestrator | Saturday 13 September 2025 01:05:41 +0000 (0:00:00.447) 0:01:13.156 **** 2025-09-13 01:06:38.047443 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:06:38.047450 | orchestrator | 2025-09-13 01:06:38.047456 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-09-13 01:06:38.047462 | orchestrator | Saturday 13 September 2025 01:05:43 +0000 (0:00:02.271) 0:01:15.427 **** 2025-09-13 01:06:38.047468 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:06:38.047474 | orchestrator | 2025-09-13 01:06:38.047484 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-09-13 01:06:38.047490 | orchestrator | Saturday 13 September 2025 01:05:46 +0000 (0:00:02.380) 0:01:17.808 **** 2025-09-13 01:06:38.047496 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:06:38.047502 | orchestrator | 2025-09-13 01:06:38.047509 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-13 01:06:38.047515 | orchestrator | Saturday 13 September 2025 01:05:57 +0000 (0:00:11.538) 0:01:29.346 **** 2025-09-13 01:06:38.047521 | orchestrator | 2025-09-13 01:06:38.047527 | orchestrator | TASK [barbican : Flush handlers] 
*********************************************** 2025-09-13 01:06:38.047533 | orchestrator | Saturday 13 September 2025 01:05:57 +0000 (0:00:00.136) 0:01:29.483 **** 2025-09-13 01:06:38.047540 | orchestrator | 2025-09-13 01:06:38.047546 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-13 01:06:38.047552 | orchestrator | Saturday 13 September 2025 01:05:57 +0000 (0:00:00.129) 0:01:29.612 **** 2025-09-13 01:06:38.047558 | orchestrator | 2025-09-13 01:06:38.047564 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-09-13 01:06:38.047570 | orchestrator | Saturday 13 September 2025 01:05:58 +0000 (0:00:00.101) 0:01:29.714 **** 2025-09-13 01:06:38.047577 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:06:38.047583 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:06:38.047589 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:06:38.047595 | orchestrator | 2025-09-13 01:06:38.047601 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-09-13 01:06:38.047608 | orchestrator | Saturday 13 September 2025 01:06:11 +0000 (0:00:13.127) 0:01:42.842 **** 2025-09-13 01:06:38.047614 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:06:38.047620 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:06:38.047630 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:06:38.047636 | orchestrator | 2025-09-13 01:06:38.047643 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-09-13 01:06:38.047649 | orchestrator | Saturday 13 September 2025 01:06:22 +0000 (0:00:11.442) 0:01:54.284 **** 2025-09-13 01:06:38.047655 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:06:38.047661 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:06:38.047667 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:06:38.047673 | orchestrator | 2025-09-13 
01:06:38.047680 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 01:06:38.047687 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-13 01:06:38.047694 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-13 01:06:38.047700 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-13 01:06:38.047706 | orchestrator | 2025-09-13 01:06:38.047713 | orchestrator | 2025-09-13 01:06:38.047719 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 01:06:38.047725 | orchestrator | Saturday 13 September 2025 01:06:35 +0000 (0:00:12.865) 0:02:07.150 **** 2025-09-13 01:06:38.047731 | orchestrator | =============================================================================== 2025-09-13 01:06:38.047737 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.54s 2025-09-13 01:06:38.047747 | orchestrator | barbican : Restart barbican-api container ------------------------------ 13.13s 2025-09-13 01:06:38.047753 | orchestrator | barbican : Restart barbican-worker container --------------------------- 12.87s 2025-09-13 01:06:38.047759 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.54s 2025-09-13 01:06:38.047766 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.44s 2025-09-13 01:06:38.047772 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.65s 2025-09-13 01:06:38.047778 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.40s 2025-09-13 01:06:38.047784 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.00s 2025-09-13 01:06:38.047790 | 
orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.84s 2025-09-13 01:06:38.047797 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.74s 2025-09-13 01:06:38.047803 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.61s 2025-09-13 01:06:38.047809 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.50s 2025-09-13 01:06:38.047815 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.47s 2025-09-13 01:06:38.047821 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.33s 2025-09-13 01:06:38.047847 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.92s 2025-09-13 01:06:38.047854 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.38s 2025-09-13 01:06:38.047860 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.32s 2025-09-13 01:06:38.047866 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.27s 2025-09-13 01:06:38.047873 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.13s 2025-09-13 01:06:38.047879 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.56s 2025-09-13 01:06:38.047885 | orchestrator | 2025-09-13 01:06:38 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED 2025-09-13 01:06:38.047896 | orchestrator | 2025-09-13 01:06:38 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:06:41.068584 | orchestrator | 2025-09-13 01:06:41 | INFO  | Task d1558316-b694-41a9-a54e-3d42927d2086 is in state STARTED 2025-09-13 01:06:41.069214 | orchestrator | 2025-09-13 01:06:41 | INFO  | Task 7d53163a-356a-4a50-82a5-d4b036da6940 is in state STARTED 2025-09-13 
01:06:41.071015 | orchestrator | 2025-09-13 01:06:41 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:06:41.072247 | orchestrator | 2025-09-13 01:06:41 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED 2025-09-13 01:06:41.072270 | orchestrator | 2025-09-13 01:06:41 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:06:44.092075 | orchestrator | 2025-09-13 01:06:44 | INFO  | Task d1558316-b694-41a9-a54e-3d42927d2086 is in state STARTED 2025-09-13 01:06:44.092283 | orchestrator | 2025-09-13 01:06:44 | INFO  | Task 7d53163a-356a-4a50-82a5-d4b036da6940 is in state STARTED 2025-09-13 01:06:44.092607 | orchestrator | 2025-09-13 01:06:44 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:06:44.094171 | orchestrator | 2025-09-13 01:06:44 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED 2025-09-13 01:06:44.094193 | orchestrator | 2025-09-13 01:06:44 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:06:47.121433 | orchestrator | 2025-09-13 01:06:47 | INFO  | Task d1558316-b694-41a9-a54e-3d42927d2086 is in state STARTED 2025-09-13 01:06:47.121670 | orchestrator | 2025-09-13 01:06:47 | INFO  | Task 7d53163a-356a-4a50-82a5-d4b036da6940 is in state STARTED 2025-09-13 01:06:47.122384 | orchestrator | 2025-09-13 01:06:47 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:06:47.122897 | orchestrator | 2025-09-13 01:06:47 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED 2025-09-13 01:06:47.122939 | orchestrator | 2025-09-13 01:06:47 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:06:50.143794 | orchestrator | 2025-09-13 01:06:50 | INFO  | Task d1558316-b694-41a9-a54e-3d42927d2086 is in state STARTED 2025-09-13 01:06:50.143929 | orchestrator | 2025-09-13 01:06:50 | INFO  | Task 7d53163a-356a-4a50-82a5-d4b036da6940 is in state STARTED 2025-09-13 01:06:50.144459 | orchestrator 
| 2025-09-13 01:06:50 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:06:50.144964 | orchestrator | 2025-09-13 01:06:50 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:06:50.144987 | orchestrator | 2025-09-13 01:06:50 | INFO  | Wait 1 second(s) until the next check
[identical status checks for tasks d1558316-b694-41a9-a54e-3d42927d2086, 7d53163a-356a-4a50-82a5-d4b036da6940, 48176d52-70eb-4f83-9826-1dd2e9823469 and 0085c939-028b-4068-ac8d-9c47d68a712a, all in state STARTED, repeated every ~3 seconds from 01:06:53 to 01:07:26]
2025-09-13 01:07:26.651895 | orchestrator | 2025-09-13 01:07:26 | INFO  | Task 7d53163a-356a-4a50-82a5-d4b036da6940 is in state SUCCESS
2025-09-13 01:07:29.690385 | orchestrator | 2025-09-13 01:07:29 | INFO  | Task cad66c07-b23d-4803-af92-7c378c90f2bf is in state STARTED
[identical status checks for tasks d1558316-b694-41a9-a54e-3d42927d2086, cad66c07-b23d-4803-af92-7c378c90f2bf, 48176d52-70eb-4f83-9826-1dd2e9823469 and 0085c939-028b-4068-ac8d-9c47d68a712a, all in state STARTED, repeated every ~3 seconds from 01:07:29 to 01:08:30]
2025-09-13 01:08:33.597889 | orchestrator |
2025-09-13 01:08:33.598004 | orchestrator |
2025-09-13 01:08:33.598072 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-09-13 01:08:33.598089 | orchestrator |
2025-09-13 01:08:33.598101 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-09-13 01:08:33.598113 | orchestrator | Saturday 13 September 2025 01:06:43 +0000 (0:00:00.075) 0:00:00.075 ****
2025-09-13 01:08:33.598124 | orchestrator | changed: [localhost]
2025-09-13 01:08:33.598136 | orchestrator |
2025-09-13 01:08:33.598148 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-09-13 01:08:33.598159 | orchestrator | Saturday 13 September 2025 01:06:45 +0000 (0:00:01.647) 0:00:01.723 ****
2025-09-13 01:08:33.598170 | orchestrator | changed: [localhost]
2025-09-13 01:08:33.598181 | orchestrator |
2025-09-13 01:08:33.598192 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-09-13 01:08:33.598204 | orchestrator | Saturday 13 September 2025 01:07:19 +0000 (0:00:33.979) 0:00:35.702 ****
2025-09-13 01:08:33.598215 | orchestrator | changed: [localhost]
2025-09-13 01:08:33.598226 | orchestrator |
2025-09-13 01:08:33.598237 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 01:08:33.598248 | orchestrator |
2025-09-13 01:08:33.598259 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-13 01:08:33.598270 | orchestrator | Saturday 13 September 2025 01:07:23 +0000
(0:00:04.345) 0:00:40.048 ****
2025-09-13 01:08:33.598281 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:08:33.598292 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:08:33.598303 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:08:33.598313 | orchestrator |
2025-09-13 01:08:33.598324 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 01:08:33.598335 | orchestrator | Saturday 13 September 2025 01:07:24 +0000 (0:00:00.579) 0:00:40.628 ****
2025-09-13 01:08:33.598346 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-09-13 01:08:33.598389 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-09-13 01:08:33.598403 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-09-13 01:08:33.598416 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-09-13 01:08:33.598429 | orchestrator |
2025-09-13 01:08:33.598441 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-09-13 01:08:33.598454 | orchestrator | skipping: no hosts matched
2025-09-13 01:08:33.598468 | orchestrator |
2025-09-13 01:08:33.598481 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 01:08:33.598493 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 01:08:33.598508 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 01:08:33.598522 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 01:08:33.598535 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 01:08:33.598548 | orchestrator |
2025-09-13 01:08:33.598560 | orchestrator |
2025-09-13 01:08:33.598574 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 01:08:33.598587 | orchestrator | Saturday 13 September 2025 01:07:25 +0000 (0:00:00.945) 0:00:41.573 ****
2025-09-13 01:08:33.598599 | orchestrator | ===============================================================================
2025-09-13 01:08:33.598612 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 33.98s
2025-09-13 01:08:33.598625 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.35s
2025-09-13 01:08:33.598637 | orchestrator | Ensure the destination directory exists --------------------------------- 1.65s
2025-09-13 01:08:33.598649 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.95s
2025-09-13 01:08:33.598662 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.58s
2025-09-13 01:08:33.598675 | orchestrator |
2025-09-13 01:08:33.598688 | orchestrator | 2025-09-13 01:08:33 | INFO  | Task d1558316-b694-41a9-a54e-3d42927d2086 is in state SUCCESS
2025-09-13 01:08:33.633917 | orchestrator |
2025-09-13 01:08:33.633977 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 01:08:33.633994 | orchestrator |
2025-09-13 01:08:33.634008 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-13 01:08:33.634064 | orchestrator | Saturday 13 September 2025 01:05:20 +0000 (0:00:00.254) 0:00:00.254 ****
2025-09-13 01:08:33.634166 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:08:33.634182 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:08:33.634227 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:08:33.634238 | orchestrator |
2025-09-13 01:08:33.634334 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 01:08:33.634358 | orchestrator | Saturday 13 September 2025
01:05:20 +0000 (0:00:00.349) 0:00:00.604 ****
2025-09-13 01:08:33.634370 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-09-13 01:08:33.634397 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-09-13 01:08:33.634408 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-09-13 01:08:33.634429 | orchestrator |
2025-09-13 01:08:33.634441 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-09-13 01:08:33.634452 | orchestrator |
2025-09-13 01:08:33.634464 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-13 01:08:33.634523 | orchestrator | Saturday 13 September 2025 01:05:21 +0000 (0:00:00.837) 0:00:01.441 ****
2025-09-13 01:08:33.634536 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 01:08:33.634551 | orchestrator |
2025-09-13 01:08:33.634578 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-09-13 01:08:33.634615 | orchestrator | Saturday 13 September 2025 01:05:22 +0000 (0:00:01.045) 0:00:02.486 ****
2025-09-13 01:08:33.634629 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-09-13 01:08:33.634642 | orchestrator |
2025-09-13 01:08:33.634655 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-09-13 01:08:33.634667 | orchestrator | Saturday 13 September 2025 01:05:26 +0000 (0:00:03.631) 0:00:06.117 ****
2025-09-13 01:08:33.634680 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-09-13 01:08:33.634692 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-09-13 01:08:33.634706 | orchestrator |
2025-09-13 01:08:33.634718 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-09-13 01:08:33.634731 | orchestrator | Saturday 13 September 2025 01:05:33 +0000 (0:00:06.535) 0:00:12.652 ****
2025-09-13 01:08:33.634743 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-13 01:08:33.634757 | orchestrator |
2025-09-13 01:08:33.634769 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-09-13 01:08:33.634783 | orchestrator | Saturday 13 September 2025 01:05:36 +0000 (0:00:03.442) 0:00:16.095 ****
2025-09-13 01:08:33.634795 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-13 01:08:33.634808 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-09-13 01:08:33.634820 | orchestrator |
2025-09-13 01:08:33.634831 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-09-13 01:08:33.634883 | orchestrator | Saturday 13 September 2025 01:05:40 +0000 (0:00:03.743) 0:00:19.839 ****
2025-09-13 01:08:33.634895 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-13 01:08:33.634906 | orchestrator |
2025-09-13 01:08:33.634917 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-09-13 01:08:33.634927 | orchestrator | Saturday 13 September 2025 01:05:43 +0000 (0:00:03.378) 0:00:23.217 ****
2025-09-13 01:08:33.634938 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-09-13 01:08:33.634949 | orchestrator |
2025-09-13 01:08:33.634960 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-09-13 01:08:33.634970 | orchestrator | Saturday 13 September 2025 01:05:48 +0000 (0:00:04.449) 0:00:27.666 ****
2025-09-13 01:08:33.634985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled':
True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-13 01:08:33.635028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-13 01:08:33.635049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-13 01:08:33.635062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-13 01:08:33.635074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-13 01:08:33.635086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-13 01:08:33.635097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-13 01:08:33.635123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-13 01:08:33.635141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-13 01:08:33.635154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-13 01:08:33.635166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-13 01:08:33.635178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes':
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-13 01:08:33.635279 | orchestrator |
2025-09-13 01:08:33.635290 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2025-09-13 01:08:33.635302 | orchestrator | Saturday 13 September 2025 01:05:51 +0000 (0:00:03.881) 0:00:31.548 ****
2025-09-13 01:08:33.635313 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:08:33.635324 | orchestrator |
2025-09-13 01:08:33.635335 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-09-13 01:08:33.635346 | orchestrator | Saturday 13 September 2025 01:05:52 +0000 (0:00:00.188) 0:00:31.736 ****
2025-09-13 01:08:33.635356 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:08:33.635367 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:08:33.635378 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:08:33.635389 | orchestrator |
2025-09-13 01:08:33.635400 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-13 01:08:33.635411 | orchestrator | Saturday 13 September 2025 01:05:52 +0000 (0:00:00.322) 0:00:32.059 ****
2025-09-13 01:08:33.635422 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 01:08:33.635433 | orchestrator |
2025-09-13 01:08:33.635444 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2025-09-13 01:08:33.635455 | orchestrator | Saturday 13 September 2025 01:05:53 +0000 (0:00:00.926) 0:00:32.985 ****
2025-09-13 01:08:33.635466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes':
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-13 01:08:33.635497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-13 01:08:33.635510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-13 01:08:33.635521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.635724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 
5672'], 'timeout': '30'}}})
2025-09-13 01:08:33.635736 | orchestrator |
2025-09-13 01:08:33.635748 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2025-09-13 01:08:33.635759 | orchestrator | Saturday 13 September 2025 01:06:00 +0000 (0:00:07.387) 0:00:40.373 ****
2025-09-13 01:08:33.635771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-13 01:08:33.635788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-13 01:08:33.635811 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.635823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.635834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.635881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.635893 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:33.635905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-13 01:08:33.635924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-13 01:08:33.636570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.636595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.636649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.636662 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.636673 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:33.636685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-13 01:08:33.636706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-13 01:08:33.636728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.636745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.636757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.636768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.636780 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:33.636791 | orchestrator | 2025-09-13 01:08:33.636809 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-13 01:08:33.636820 | orchestrator | Saturday 13 September 2025 01:06:01 +0000 (0:00:00.875) 0:00:41.249 **** 2025-09-13 01:08:33.636831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2025-09-13 01:08:33.636906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-13 01:08:33.636925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.636971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.636985 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.636997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.637017 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:33.637029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-13 01:08:33.637040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-13 01:08:33.637059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.637076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.637088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.637100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.637119 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:33.637131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-13 01:08:33.637143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-13 01:08:33.637154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.637174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.637186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.637197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.637213 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:33.637223 | orchestrator | 2025-09-13 01:08:33.637234 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-13 01:08:33.637244 | orchestrator | Saturday 13 September 2025 01:06:03 +0000 (0:00:02.172) 0:00:43.421 **** 2025-09-13 01:08:33.637255 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-13 01:08:33.637265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-13 01:08:33.637287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-13 01:08:33.637298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2025-09-13 01:08:33.637329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637366 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637499 | orchestrator | 2025-09-13 01:08:33.637510 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-09-13 01:08:33.637520 | orchestrator | Saturday 13 September 2025 01:06:10 +0000 (0:00:06.422) 0:00:49.843 **** 2025-09-13 01:08:33.637531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-13 01:08:33.637542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-13 01:08:33.637553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-13 01:08:33.637572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637584 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637622 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.637779 | orchestrator | 2025-09-13 01:08:33.637789 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-09-13 01:08:33.637799 | orchestrator | Saturday 13 September 2025 01:06:32 +0000 (0:00:22.269) 0:01:12.113 **** 2025-09-13 01:08:33.637809 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-13 01:08:33.637819 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-13 01:08:33.637829 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-13 01:08:33.637860 | orchestrator | 2025-09-13 01:08:33.637870 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-09-13 01:08:33.637880 | orchestrator | Saturday 13 September 2025 01:06:38 +0000 (0:00:06.345) 0:01:18.458 **** 2025-09-13 01:08:33.637889 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-13 01:08:33.637899 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-13 01:08:33.637909 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-13 01:08:33.637918 | orchestrator | 2025-09-13 01:08:33.637928 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-13 01:08:33.637937 | orchestrator | Saturday 13 September 2025 01:06:42 +0000 (0:00:03.364) 0:01:21.823 **** 2025-09-13 01:08:33.637947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-13 01:08:33.637958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-13 01:08:33.637979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-13 01:08:33.637997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-13 01:08:33.638007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-13 01:08:33.638144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-13 01:08:33.638214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.638274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.638285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.638295 | orchestrator | 2025-09-13 01:08:33.638305 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-13 01:08:33.638315 | orchestrator | Saturday 13 September 2025 01:06:45 +0000 (0:00:03.573) 0:01:25.396 **** 2025-09-13 01:08:33.638325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-13 
01:08:33.638335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-13 01:08:33.638346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-13 01:08:33.638371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-13 01:08:33.638382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-13 01:08:33.638423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-13 01:08:33.638482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.638533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.638548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.638558 | orchestrator | 2025-09-13 01:08:33.638568 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-13 01:08:33.638578 | orchestrator | Saturday 13 September 2025 01:06:48 +0000 (0:00:03.169) 0:01:28.565 **** 2025-09-13 01:08:33.638588 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:33.638598 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:33.638608 | orchestrator | skipping: [testbed-node-2] 
2025-09-13 01:08:33.638617 | orchestrator | 2025-09-13 01:08:33.638627 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-13 01:08:33.638636 | orchestrator | Saturday 13 September 2025 01:06:49 +0000 (0:00:00.450) 0:01:29.016 **** 2025-09-13 01:08:33.638647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-13 01:08:33.638657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-13 01:08:33.638673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638726 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:33.638736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-13 01:08:33.638746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-13 01:08:33.638762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638802 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638813 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:33.638823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-13 01:08:33.638834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-13 01:08:33.638866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:08:33.638918 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:33.638927 | orchestrator | 2025-09-13 01:08:33.638937 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-13 01:08:33.638947 | orchestrator | Saturday 13 September 2025 01:06:50 +0000 (0:00:01.048) 0:01:30.065 **** 2025-09-13 01:08:33.638957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}}) 2025-09-13 01:08:33.638967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-13 01:08:33.638988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-13 01:08:33.638998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-13 01:08:33.639021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-13 01:08:33.639032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-13 01:08:33.639042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.639058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.639069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.639079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.639093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.639108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.639118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.639128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.639145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.639155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.639165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.639180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:08:33.639190 | orchestrator | 2025-09-13 01:08:33.639200 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-13 01:08:33.639214 | orchestrator | Saturday 13 September 2025 01:06:54 +0000 (0:00:04.106) 0:01:34.171 **** 2025-09-13 01:08:33.639224 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:33.639234 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:33.639244 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:33.639254 | orchestrator | 2025-09-13 01:08:33.639263 | 
orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-09-13 01:08:33.639273 | orchestrator | Saturday 13 September 2025 01:06:54 +0000 (0:00:00.319) 0:01:34.490 ****
2025-09-13 01:08:33.639283 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-09-13 01:08:33.639292 | orchestrator |
2025-09-13 01:08:33.639302 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-09-13 01:08:33.639312 | orchestrator | Saturday 13 September 2025 01:06:56 +0000 (0:00:01.932) 0:01:36.422 ****
2025-09-13 01:08:33.639322 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-13 01:08:33.639337 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-09-13 01:08:33.639347 | orchestrator |
2025-09-13 01:08:33.639356 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-09-13 01:08:33.639366 | orchestrator | Saturday 13 September 2025 01:06:58 +0000 (0:00:01.866) 0:01:38.289 ****
2025-09-13 01:08:33.639375 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:08:33.639385 | orchestrator |
2025-09-13 01:08:33.639395 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-13 01:08:33.639404 | orchestrator | Saturday 13 September 2025 01:07:17 +0000 (0:00:18.429) 0:01:56.718 ****
2025-09-13 01:08:33.639414 | orchestrator |
2025-09-13 01:08:33.639424 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-13 01:08:33.639434 | orchestrator | Saturday 13 September 2025 01:07:17 +0000 (0:00:00.570) 0:01:57.290 ****
2025-09-13 01:08:33.639443 | orchestrator |
2025-09-13 01:08:33.639453 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-13 01:08:33.639463 | orchestrator | Saturday 13 September 2025 01:07:17 +0000 (0:00:00.156) 0:01:57.447 ****
2025-09-13 01:08:33.639472 | orchestrator |
2025-09-13 01:08:33.639482 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-09-13 01:08:33.639491 | orchestrator | Saturday 13 September 2025 01:07:17 +0000 (0:00:00.142) 0:01:57.590 ****
2025-09-13 01:08:33.639501 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:08:33.639511 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:08:33.639520 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:08:33.639530 | orchestrator |
2025-09-13 01:08:33.639539 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-09-13 01:08:33.639549 | orchestrator | Saturday 13 September 2025 01:07:26 +0000 (0:00:08.976) 0:02:06.566 ****
2025-09-13 01:08:33.639559 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:08:33.639569 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:08:33.639578 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:08:33.639588 | orchestrator |
2025-09-13 01:08:33.639598 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-09-13 01:08:33.639607 | orchestrator | Saturday 13 September 2025 01:07:39 +0000 (0:00:12.981) 0:02:19.548 ****
2025-09-13 01:08:33.639617 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:08:33.639627 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:08:33.639636 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:08:33.639646 | orchestrator |
2025-09-13 01:08:33.639656 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-09-13 01:08:33.639665 | orchestrator | Saturday 13 September 2025 01:07:48 +0000 (0:00:08.749) 0:02:28.297 ****
2025-09-13 01:08:33.639675 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:08:33.639685 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:08:33.639694 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:08:33.639704 | orchestrator |
2025-09-13 01:08:33.639714 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-09-13 01:08:33.639724 | orchestrator | Saturday 13 September 2025 01:08:00 +0000 (0:00:11.720) 0:02:40.018 ****
2025-09-13 01:08:33.639733 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:08:33.639743 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:08:33.639753 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:08:33.639762 | orchestrator |
2025-09-13 01:08:33.639772 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-09-13 01:08:33.639781 | orchestrator | Saturday 13 September 2025 01:08:11 +0000 (0:00:11.433) 0:02:51.451 ****
2025-09-13 01:08:33.639791 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:08:33.639801 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:08:33.639811 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:08:33.639820 | orchestrator |
2025-09-13 01:08:33.639830 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-09-13 01:08:33.639854 | orchestrator | Saturday 13 September 2025 01:08:25 +0000 (0:00:13.923) 0:03:05.375 ****
2025-09-13 01:08:33.639869 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:08:33.639879 | orchestrator |
2025-09-13 01:08:33.639889 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 01:08:33.639899 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-13 01:08:33.639909 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-13 01:08:33.639919 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-13 01:08:33.639929 | orchestrator |
2025-09-13 01:08:33.639939 | orchestrator |
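Each container definition logged above carries a healthcheck of the form `['CMD-SHELL', 'healthcheck_port designate-worker 5672']` with `interval`, `retries`, `start_period`, and `timeout` fields, i.e. the container runtime periodically runs a command and treats exit code 0 as healthy. As a rough illustration of a TCP port probe of that shape (this is a hypothetical `check_port` helper, not the actual kolla `healthcheck_port` script, which inspects a named process's established connections):

```python
import socket
import sys

def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__" and len(sys.argv) == 3:
    # Docker healthcheck convention: exit 0 = healthy, nonzero = unhealthy.
    sys.exit(0 if check_port(sys.argv[1], int(sys.argv[2])) else 1)
```

A probe like this would be wired into the `test` field, with Docker retrying it `retries` times every `interval` seconds before flagging the container unhealthy.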
2025-09-13 01:08:33.639953 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 01:08:33.639963 | orchestrator | Saturday 13 September 2025 01:08:32 +0000 (0:00:07.125) 0:03:12.500 ****
2025-09-13 01:08:33.639973 | orchestrator | ===============================================================================
2025-09-13 01:08:33.639983 | orchestrator | designate : Copying over designate.conf -------------------------------- 22.27s
2025-09-13 01:08:33.639997 | orchestrator | designate : Running Designate bootstrap container ---------------------- 18.43s
2025-09-13 01:08:33.640006 | orchestrator | designate : Restart designate-worker container ------------------------- 13.92s
2025-09-13 01:08:33.640016 | orchestrator | designate : Restart designate-api container ---------------------------- 12.98s
2025-09-13 01:08:33.640026 | orchestrator | designate : Restart designate-producer container ----------------------- 11.72s
2025-09-13 01:08:33.640035 | orchestrator | designate : Restart designate-mdns container --------------------------- 11.43s
2025-09-13 01:08:33.640045 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.98s
2025-09-13 01:08:33.640055 | orchestrator | designate : Restart designate-central container ------------------------- 8.75s
2025-09-13 01:08:33.640064 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.39s
2025-09-13 01:08:33.640074 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.13s
2025-09-13 01:08:33.640083 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.54s
2025-09-13 01:08:33.640093 | orchestrator | designate : Copying over config.json files for services ----------------- 6.42s
2025-09-13 01:08:33.640103 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.34s
2025-09-13 01:08:33.640112 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.45s
2025-09-13 01:08:33.640122 | orchestrator | designate : Check designate containers ---------------------------------- 4.11s
2025-09-13 01:08:33.640132 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.88s
2025-09-13 01:08:33.640141 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.74s
2025-09-13 01:08:33.640151 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.63s
2025-09-13 01:08:33.640160 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.57s
2025-09-13 01:08:33.640170 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.44s
2025-09-13 01:08:33.640180 | orchestrator | 2025-09-13 01:08:33 | INFO  | Task cad66c07-b23d-4803-af92-7c378c90f2bf is in state STARTED
2025-09-13 01:08:33.640190 | orchestrator | 2025-09-13 01:08:33 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:08:33.640200 | orchestrator | 2025-09-13 01:08:33 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:08:33.640210 | orchestrator | 2025-09-13 01:08:33 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:08:36.642437 | orchestrator | 2025-09-13 01:08:36 | INFO  | Task cad66c07-b23d-4803-af92-7c378c90f2bf is in state STARTED
2025-09-13 01:08:36.643432 | orchestrator | 2025-09-13 01:08:36 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:08:36.645247 | orchestrator | 2025-09-13 01:08:36 | INFO  | Task 1ff4ca93-701b-401e-ad9b-40c3f05b2848 is in state STARTED
2025-09-13 01:08:36.646201 | orchestrator | 2025-09-13 01:08:36 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:08:36.647499 | orchestrator | 2025-09-13 01:08:36 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:08:39.708605 | orchestrator | 2025-09-13 01:08:39 | INFO  | Task cad66c07-b23d-4803-af92-7c378c90f2bf is in state STARTED
2025-09-13 01:08:39.711267 | orchestrator | 2025-09-13 01:08:39 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:08:39.713563 | orchestrator | 2025-09-13 01:08:39 | INFO  | Task 1ff4ca93-701b-401e-ad9b-40c3f05b2848 is in state STARTED
2025-09-13 01:08:39.715571 | orchestrator | 2025-09-13 01:08:39 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state STARTED
2025-09-13 01:08:39.715794 | orchestrator | 2025-09-13 01:08:39 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:08:42.769514 | orchestrator | 2025-09-13 01:08:42 | INFO  | Task cad66c07-b23d-4803-af92-7c378c90f2bf is in state SUCCESS
2025-09-13 01:08:42.772202 | orchestrator |
2025-09-13 01:08:42.772248 | orchestrator |
2025-09-13 01:08:42.772262 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 01:08:42.772274 | orchestrator |
2025-09-13 01:08:42.772285 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-13 01:08:42.772297 | orchestrator | Saturday 13 September 2025 01:07:32 +0000 (0:00:00.401) 0:00:00.401 ****
2025-09-13 01:08:42.772309 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:08:42.772321 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:08:42.772332 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:08:42.772343 | orchestrator |
2025-09-13 01:08:42.772354 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 01:08:42.772366 | orchestrator | Saturday 13 September 2025 01:07:32 +0000 (0:00:00.665) 0:00:01.067 ****
2025-09-13 01:08:42.772378 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-09-13 01:08:42.772389 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-09-13 01:08:42.772400 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-09-13 01:08:42.772411 | orchestrator |
2025-09-13 01:08:42.772422 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-09-13 01:08:42.772433 | orchestrator |
2025-09-13 01:08:42.772444 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-13 01:08:42.772472 | orchestrator | Saturday 13 September 2025 01:07:33 +0000 (0:00:00.567) 0:00:01.634 ****
2025-09-13 01:08:42.772484 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 01:08:42.772496 | orchestrator |
2025-09-13 01:08:42.772507 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-09-13 01:08:42.772517 | orchestrator | Saturday 13 September 2025 01:07:33 +0000 (0:00:00.540) 0:00:02.175 ****
2025-09-13 01:08:42.772530 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-09-13 01:08:42.772541 | orchestrator |
2025-09-13 01:08:42.772552 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-09-13 01:08:42.772563 | orchestrator | Saturday 13 September 2025 01:07:37 +0000 (0:00:03.995) 0:00:06.170 ****
2025-09-13 01:08:42.772574 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-09-13 01:08:42.772586 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-09-13 01:08:42.772597 | orchestrator |
2025-09-13 01:08:42.772608 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-09-13 01:08:42.772642 | orchestrator | Saturday 13 September 2025 01:07:44 +0000 (0:00:06.840) 0:00:13.010 ****
2025-09-13 01:08:42.772654 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-13 01:08:42.772665 | orchestrator |
2025-09-13 01:08:42.772676 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-09-13 01:08:42.772687 | orchestrator | Saturday 13 September 2025 01:07:47 +0000 (0:00:03.310) 0:00:16.321 ****
2025-09-13 01:08:42.772698 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-13 01:08:42.772709 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-09-13 01:08:42.772720 | orchestrator |
2025-09-13 01:08:42.772730 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-09-13 01:08:42.772741 | orchestrator | Saturday 13 September 2025 01:07:51 +0000 (0:00:03.973) 0:00:20.294 ****
2025-09-13 01:08:42.772752 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-13 01:08:42.772763 | orchestrator |
2025-09-13 01:08:42.772774 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-09-13 01:08:42.772786 | orchestrator | Saturday 13 September 2025 01:07:55 +0000 (0:00:03.540) 0:00:23.835 ****
2025-09-13 01:08:42.772796 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-09-13 01:08:42.772807 | orchestrator |
2025-09-13 01:08:42.772818 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-13 01:08:42.772829 | orchestrator | Saturday 13 September 2025 01:07:59 +0000 (0:00:04.286) 0:00:28.122 ****
2025-09-13 01:08:42.772866 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:08:42.772878 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:08:42.772889 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:08:42.772900 | orchestrator |
2025-09-13 01:08:42.772911 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-09-13 01:08:42.772922 | orchestrator |
Saturday 13 September 2025 01:08:00 +0000 (0:00:00.294) 0:00:28.416 **** 2025-09-13 01:08:42.772937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 01:08:42.772967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 01:08:42.772987 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 01:08:42.773008 | orchestrator | 2025-09-13 01:08:42.773019 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-13 01:08:42.773030 | orchestrator | Saturday 13 September 2025 01:08:00 +0000 (0:00:00.937) 0:00:29.353 **** 2025-09-13 01:08:42.773042 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.773052 | orchestrator | 2025-09-13 01:08:42.773063 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-09-13 01:08:42.773074 | orchestrator | Saturday 13 September 2025 01:08:01 +0000 (0:00:00.124) 0:00:29.477 **** 2025-09-13 01:08:42.773085 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.773096 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.773106 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.773117 | orchestrator | 2025-09-13 01:08:42.773128 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-13 01:08:42.773139 | orchestrator | Saturday 13 September 2025 01:08:01 +0000 
(0:00:00.509) 0:00:29.987 **** 2025-09-13 01:08:42.773150 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 01:08:42.773161 | orchestrator | 2025-09-13 01:08:42.773172 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-13 01:08:42.773183 | orchestrator | Saturday 13 September 2025 01:08:02 +0000 (0:00:00.687) 0:00:30.675 **** 2025-09-13 01:08:42.773195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 01:08:42.773216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': 
'30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 01:08:42.773234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 01:08:42.773253 | orchestrator | 2025-09-13 01:08:42.773264 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-13 01:08:42.773275 | orchestrator | Saturday 13 September 2025 01:08:04 +0000 (0:00:01.713) 0:00:32.388 **** 2025-09-13 01:08:42.773286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-13 01:08:42.773298 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.773310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-13 01:08:42.773321 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.773338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-13 01:08:42.773349 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.773368 | orchestrator | 2025-09-13 01:08:42.773379 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-13 01:08:42.773390 | orchestrator | Saturday 13 September 2025 01:08:04 +0000 (0:00:00.831) 0:00:33.220 **** 2025-09-13 01:08:42.773406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-13 01:08:42.773418 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.773430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-13 01:08:42.773441 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.773452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-13 01:08:42.773463 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.773474 | orchestrator | 2025-09-13 01:08:42.773485 | 
orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-13 01:08:42.773496 | orchestrator | Saturday 13 September 2025 01:08:05 +0000 (0:00:00.680) 0:00:33.900 **** 2025-09-13 01:08:42.773512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 01:08:42.773537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 01:08:42.773549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 01:08:42.773560 | orchestrator | 2025-09-13 01:08:42.773571 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-13 01:08:42.773583 | orchestrator | Saturday 13 September 2025 01:08:06 +0000 (0:00:01.345) 0:00:35.246 **** 2025-09-13 01:08:42.773594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 01:08:42.773606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 01:08:42.773631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 01:08:42.773643 | orchestrator | 2025-09-13 01:08:42.773654 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-13 01:08:42.773666 | orchestrator | Saturday 13 September 2025 01:08:10 +0000 (0:00:03.736) 0:00:38.982 **** 2025-09-13 01:08:42.773676 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-13 01:08:42.773692 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-13 01:08:42.773704 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-13 01:08:42.773715 | orchestrator | 2025-09-13 01:08:42.773726 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-09-13 01:08:42.773737 | orchestrator | Saturday 13 September 2025 01:08:13 +0000 (0:00:02.605) 0:00:41.588 **** 2025-09-13 01:08:42.773747 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:08:42.773758 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:08:42.773769 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:08:42.773780 | orchestrator | 2025-09-13 01:08:42.773791 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-13 01:08:42.773802 | orchestrator | Saturday 13 September 2025 01:08:15 +0000 (0:00:01.801) 0:00:43.390 **** 2025-09-13 01:08:42.773813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-13 01:08:42.773825 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.773836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-13 01:08:42.773888 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.773907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-13 01:08:42.773919 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.773930 | orchestrator | 2025-09-13 01:08:42.773940 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-13 01:08:42.773951 | orchestrator | Saturday 13 September 2025 01:08:16 +0000 (0:00:01.220) 0:00:44.611 **** 2025-09-13 01:08:42.773968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 01:08:42.773980 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 01:08:42.773992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-13 01:08:42.774062 | orchestrator | 2025-09-13 01:08:42.774077 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-13 01:08:42.774088 | orchestrator | Saturday 
13 September 2025 01:08:17 +0000 (0:00:01.332) 0:00:45.943 ****
2025-09-13 01:08:42.774099 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:08:42.774110 | orchestrator |
2025-09-13 01:08:42.774120 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-09-13 01:08:42.774131 | orchestrator | Saturday 13 September 2025 01:08:20 +0000 (0:00:02.628) 0:00:48.571 ****
2025-09-13 01:08:42.774142 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:08:42.774153 | orchestrator |
2025-09-13 01:08:42.774164 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-09-13 01:08:42.774175 | orchestrator | Saturday 13 September 2025 01:08:22 +0000 (0:00:02.453) 0:00:51.024 ****
2025-09-13 01:08:42.774186 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:08:42.774197 | orchestrator |
2025-09-13 01:08:42.774208 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-13 01:08:42.774218 | orchestrator | Saturday 13 September 2025 01:08:35 +0000 (0:00:12.850) 0:01:03.875 ****
2025-09-13 01:08:42.774229 | orchestrator |
2025-09-13 01:08:42.774240 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-13 01:08:42.774251 | orchestrator | Saturday 13 September 2025 01:08:35 +0000 (0:00:00.082) 0:01:03.957 ****
2025-09-13 01:08:42.774262 | orchestrator |
2025-09-13 01:08:42.774279 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-13 01:08:42.774290 | orchestrator | Saturday 13 September 2025 01:08:35 +0000 (0:00:00.076) 0:01:04.034 ****
2025-09-13 01:08:42.774301 | orchestrator |
2025-09-13 01:08:42.774312 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-09-13 01:08:42.774323 | orchestrator | Saturday 13 September 2025 01:08:35 +0000 (0:00:00.074) 0:01:04.108 ****
2025-09-13 01:08:42.774334 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:08:42.774345 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:08:42.774355 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:08:42.774366 | orchestrator |
2025-09-13 01:08:42.774377 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 01:08:42.774390 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-13 01:08:42.774403 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-13 01:08:42.774419 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-13 01:08:42.774431 | orchestrator |
2025-09-13 01:08:42.774442 | orchestrator |
2025-09-13 01:08:42.774453 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 01:08:42.774464 | orchestrator | Saturday 13 September 2025 01:08:41 +0000 (0:00:05.587) 0:01:09.696 ****
2025-09-13 01:08:42.774474 | orchestrator | ===============================================================================
2025-09-13 01:08:42.774485 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.85s
2025-09-13 01:08:42.774496 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.84s
2025-09-13 01:08:42.774507 | orchestrator | placement : Restart placement-api container ----------------------------- 5.59s
2025-09-13 01:08:42.774638 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.29s
2025-09-13 01:08:42.774656 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.00s
2025-09-13 01:08:42.774676 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.97s
2025-09-13 01:08:42.774687 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.74s
2025-09-13 01:08:42.774698 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.54s
2025-09-13 01:08:42.774709 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.31s
2025-09-13 01:08:42.774720 | orchestrator | placement : Creating placement databases -------------------------------- 2.63s
2025-09-13 01:08:42.774731 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.61s
2025-09-13 01:08:42.774741 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.45s
2025-09-13 01:08:42.774752 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.80s
2025-09-13 01:08:42.774763 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.71s
2025-09-13 01:08:42.774774 | orchestrator | placement : Copying over config.json files for services ----------------- 1.35s
2025-09-13 01:08:42.774784 | orchestrator | placement : Check placement containers ---------------------------------- 1.33s
2025-09-13 01:08:42.774795 | orchestrator | placement : Copying over existing policy file --------------------------- 1.22s
2025-09-13 01:08:42.774806 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.94s
2025-09-13 01:08:42.774817 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.83s
2025-09-13 01:08:42.774828 | orchestrator | placement : include_tasks ----------------------------------------------- 0.69s
2025-09-13 01:08:42.774897 | orchestrator | 2025-09-13 01:08:42 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED
2025-09-13 01:08:42.776302 | orchestrator | 2025-09-13 01:08:42 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state
STARTED
2025-09-13 01:08:42.778500 | orchestrator | 2025-09-13 01:08:42 | INFO  | Task 1ff4ca93-701b-401e-ad9b-40c3f05b2848 is in state STARTED
2025-09-13 01:08:42.780953 | orchestrator | 2025-09-13 01:08:42 | INFO  | Task 18eb45b2-264f-4467-bf3e-3e62a5f7f96d is in state STARTED
2025-09-13 01:08:42.785545 | orchestrator | 2025-09-13 01:08:42 | INFO  | Task 0085c939-028b-4068-ac8d-9c47d68a712a is in state SUCCESS
2025-09-13 01:08:42.786274 | orchestrator | 2025-09-13 01:08:42 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:08:42.787522 | orchestrator |
2025-09-13 01:08:42.787561 | orchestrator |
2025-09-13 01:08:42.787573 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 01:08:42.787584 | orchestrator |
2025-09-13 01:08:42.787595 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-13 01:08:42.787606 | orchestrator | Saturday 13 September 2025 01:04:24 +0000 (0:00:00.264) 0:00:00.264 ****
2025-09-13 01:08:42.787617 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:08:42.787629 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:08:42.787640 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:08:42.787651 | orchestrator | ok: [testbed-node-3]
2025-09-13 01:08:42.787661 | orchestrator | ok: [testbed-node-4]
2025-09-13 01:08:42.787672 | orchestrator | ok: [testbed-node-5]
2025-09-13 01:08:42.787683 | orchestrator |
2025-09-13 01:08:42.787693 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 01:08:42.787704 | orchestrator | Saturday 13 September 2025 01:04:25 +0000 (0:00:00.583) 0:00:00.847 ****
2025-09-13 01:08:42.787715 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-09-13 01:08:42.787726 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-09-13 01:08:42.787737 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-09-13 01:08:42.787747 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-09-13 01:08:42.787758 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-09-13 01:08:42.787768 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-09-13 01:08:42.787799 | orchestrator |
2025-09-13 01:08:42.787810 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-09-13 01:08:42.787836 | orchestrator |
2025-09-13 01:08:42.787871 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-13 01:08:42.787882 | orchestrator | Saturday 13 September 2025 01:04:25 +0000 (0:00:00.596) 0:00:01.443 ****
2025-09-13 01:08:42.787894 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 01:08:42.788754 | orchestrator |
2025-09-13 01:08:42.788772 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-09-13 01:08:42.788796 | orchestrator | Saturday 13 September 2025 01:04:26 +0000 (0:00:01.067) 0:00:02.510 ****
2025-09-13 01:08:42.788807 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:08:42.788818 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:08:42.788829 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:08:42.788865 | orchestrator | ok: [testbed-node-3]
2025-09-13 01:08:42.788877 | orchestrator | ok: [testbed-node-4]
2025-09-13 01:08:42.788888 | orchestrator | ok: [testbed-node-5]
2025-09-13 01:08:42.788898 | orchestrator |
2025-09-13 01:08:42.788909 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-09-13 01:08:42.788920 | orchestrator | Saturday 13 September 2025 01:04:27 +0000 (0:00:01.125) 0:00:03.635 ****
2025-09-13 01:08:42.788931 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:08:42.788942 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:08:42.788952 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:08:42.788963 | orchestrator | ok: [testbed-node-3]
2025-09-13 01:08:42.788973 | orchestrator | ok: [testbed-node-4]
2025-09-13 01:08:42.788984 | orchestrator | ok: [testbed-node-5]
2025-09-13 01:08:42.788995 | orchestrator |
2025-09-13 01:08:42.789080 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-09-13 01:08:42.789094 | orchestrator | Saturday 13 September 2025 01:04:28 +0000 (0:00:00.983) 0:00:04.619 ****
2025-09-13 01:08:42.789105 | orchestrator | ok: [testbed-node-0] => {
2025-09-13 01:08:42.789116 | orchestrator |  "changed": false,
2025-09-13 01:08:42.789127 | orchestrator |  "msg": "All assertions passed"
2025-09-13 01:08:42.789139 | orchestrator | }
2025-09-13 01:08:42.789150 | orchestrator | ok: [testbed-node-1] => {
2025-09-13 01:08:42.789161 | orchestrator |  "changed": false,
2025-09-13 01:08:42.789172 | orchestrator |  "msg": "All assertions passed"
2025-09-13 01:08:42.789183 | orchestrator | }
2025-09-13 01:08:42.789194 | orchestrator | ok: [testbed-node-2] => {
2025-09-13 01:08:42.789205 | orchestrator |  "changed": false,
2025-09-13 01:08:42.789216 | orchestrator |  "msg": "All assertions passed"
2025-09-13 01:08:42.789227 | orchestrator | }
2025-09-13 01:08:42.789238 | orchestrator | ok: [testbed-node-3] => {
2025-09-13 01:08:42.789249 | orchestrator |  "changed": false,
2025-09-13 01:08:42.789260 | orchestrator |  "msg": "All assertions passed"
2025-09-13 01:08:42.789271 | orchestrator | }
2025-09-13 01:08:42.789282 | orchestrator | ok: [testbed-node-4] => {
2025-09-13 01:08:42.789293 | orchestrator |  "changed": false,
2025-09-13 01:08:42.789304 | orchestrator |  "msg": "All assertions passed"
2025-09-13 01:08:42.789315 | orchestrator | }
2025-09-13 01:08:42.789326 | orchestrator | ok: [testbed-node-5] => {
2025-09-13 01:08:42.789339 | orchestrator |  "changed": false,
2025-09-13 01:08:42.789352 | orchestrator |  "msg": "All assertions passed"
2025-09-13 01:08:42.789364 | orchestrator | }
2025-09-13 01:08:42.789391 | orchestrator |
2025-09-13 01:08:42.789403 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-09-13 01:08:42.789416 | orchestrator | Saturday 13 September 2025 01:04:29 +0000 (0:00:00.626) 0:00:05.246 ****
2025-09-13 01:08:42.789429 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:08:42.789442 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:08:42.789455 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:08:42.789467 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:08:42.789479 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:08:42.789504 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:08:42.789516 | orchestrator |
2025-09-13 01:08:42.789529 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-09-13 01:08:42.789542 | orchestrator | Saturday 13 September 2025 01:04:30 +0000 (0:00:00.570) 0:00:05.816 ****
2025-09-13 01:08:42.789555 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-09-13 01:08:42.789568 | orchestrator |
2025-09-13 01:08:42.789581 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-09-13 01:08:42.789594 | orchestrator | Saturday 13 September 2025 01:04:33 +0000 (0:00:03.383) 0:00:09.199 ****
2025-09-13 01:08:42.789607 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-09-13 01:08:42.789620 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-09-13 01:08:42.789632 | orchestrator |
2025-09-13 01:08:42.789688 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-09-13 01:08:42.789702 | orchestrator | Saturday 13 September 2025 01:04:39 +0000 (0:00:06.357) 0:00:15.556 ****
2025-09-13 01:08:42.789712 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-13 01:08:42.789723 | orchestrator |
2025-09-13 01:08:42.789734 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-09-13 01:08:42.789745 | orchestrator | Saturday 13 September 2025 01:04:43 +0000 (0:00:03.319) 0:00:18.876 ****
2025-09-13 01:08:42.789756 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-13 01:08:42.789766 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-09-13 01:08:42.789777 | orchestrator |
2025-09-13 01:08:42.789788 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-09-13 01:08:42.789798 | orchestrator | Saturday 13 September 2025 01:04:46 +0000 (0:00:03.714) 0:00:22.591 ****
2025-09-13 01:08:42.789809 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-13 01:08:42.789820 | orchestrator |
2025-09-13 01:08:42.789830 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-09-13 01:08:42.789862 | orchestrator | Saturday 13 September 2025 01:04:50 +0000 (0:00:03.256) 0:00:25.848 ****
2025-09-13 01:08:42.789894 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-09-13 01:08:42.789905 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-09-13 01:08:42.789915 | orchestrator |
2025-09-13 01:08:42.789926 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-13 01:08:42.789937 | orchestrator | Saturday 13 September 2025 01:04:57 +0000 (0:00:07.498) 0:00:33.346 ****
2025-09-13 01:08:42.789947 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:08:42.789958 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:08:42.789969 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:08:42.789980 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:08:42.789991 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:08:42.790002 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:08:42.790013 | orchestrator |
2025-09-13 01:08:42.790075 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-09-13 01:08:42.790094 | orchestrator | Saturday 13 September 2025 01:04:58 +0000 (0:00:00.651) 0:00:33.997 ****
2025-09-13 01:08:42.790106 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:08:42.790116 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:08:42.790127 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:08:42.790138 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:08:42.790148 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:08:42.790159 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:08:42.790170 | orchestrator |
2025-09-13 01:08:42.790181 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-09-13 01:08:42.790191 | orchestrator | Saturday 13 September 2025 01:04:59 +0000 (0:00:01.640) 0:00:35.637 ****
2025-09-13 01:08:42.790202 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:08:42.790223 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:08:42.790234 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:08:42.790245 | orchestrator | ok: [testbed-node-3]
2025-09-13 01:08:42.790256 | orchestrator | ok: [testbed-node-4]
2025-09-13 01:08:42.790266 | orchestrator | ok: [testbed-node-5]
2025-09-13 01:08:42.790277 | orchestrator |
2025-09-13 01:08:42.790288 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-09-13 01:08:42.790298 | orchestrator | Saturday 13 September 2025 01:05:00 +0000 (0:00:00.948) 0:00:36.586 ****
2025-09-13 01:08:42.790309 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:08:42.790320 | orchestrator |
skipping: [testbed-node-0] 2025-09-13 01:08:42.790331 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.790341 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.790352 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.790363 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.790374 | orchestrator | 2025-09-13 01:08:42.790385 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-09-13 01:08:42.790395 | orchestrator | Saturday 13 September 2025 01:05:03 +0000 (0:00:02.434) 0:00:39.021 **** 2025-09-13 01:08:42.790410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-13 01:08:42.790461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-13 01:08:42.790475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-13 01:08:42.790493 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-13 01:08:42.790512 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-13 01:08:42.790524 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-13 01:08:42.790535 | orchestrator | 2025-09-13 01:08:42.790546 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-13 01:08:42.790558 | orchestrator | Saturday 13 September 2025 01:05:06 +0000 (0:00:02.992) 0:00:42.014 **** 2025-09-13 
01:08:42.790569 | orchestrator | [WARNING]: Skipped
2025-09-13 01:08:42.790580 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-09-13 01:08:42.790591 | orchestrator | due to this access issue:
2025-09-13 01:08:42.790602 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-09-13 01:08:42.790613 | orchestrator | a directory
2025-09-13 01:08:42.790624 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-13 01:08:42.790635 | orchestrator |
2025-09-13 01:08:42.790646 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-13 01:08:42.790681 | orchestrator | Saturday 13 September 2025 01:05:07 +0000 (0:00:00.843) 0:00:42.857 ****
2025-09-13 01:08:42.790694 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 01:08:42.790706 | orchestrator |
2025-09-13 01:08:42.790717 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2025-09-13 01:08:42.790728 | orchestrator | Saturday 13 September 2025 01:05:08 +0000 (0:00:01.242) 0:00:44.100 ****
2025-09-13 01:08:42.790739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-13 01:08:42.790764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-13 01:08:42.790776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-13 01:08:42.790788 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-13 01:08:42.790825 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-13 01:08:42.790838 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-13 01:08:42.790915 | orchestrator | 2025-09-13 01:08:42.790926 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-13 01:08:42.790937 | orchestrator | Saturday 13 September 2025 01:05:11 +0000 (0:00:03.275) 0:00:47.375 **** 2025-09-13 01:08:42.790954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-13 01:08:42.790966 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.790978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.790990 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:08:42.791001 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.791013 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:08:42.791054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.791075 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:08:42.791097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.791109 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:08:42.791120 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.791131 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:08:42.791142 | orchestrator |
2025-09-13 01:08:42.791154 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2025-09-13 01:08:42.791164 | orchestrator | Saturday 13 September 2025 01:05:14 +0000 (0:00:02.569) 0:00:49.944 ****
2025-09-13 01:08:42.791176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.791187 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:08:42.791203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.791219 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:08:42.791229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.791240 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:08:42.791254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.791264 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:08:42.791274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.791285 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:08:42.791294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.791304 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:08:42.791314 | orchestrator |
2025-09-13 01:08:42.791324 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-09-13 01:08:42.791333 | orchestrator | Saturday 13 September 2025 01:05:17 +0000 (0:00:03.455) 0:00:53.400 ****
2025-09-13 01:08:42.791348 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:08:42.791358 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:08:42.791367 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:08:42.791377 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:08:42.791387 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:08:42.791396 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:08:42.791406 | orchestrator |
2025-09-13 01:08:42.791416 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-09-13 01:08:42.791432 | orchestrator | Saturday 13 September 2025 01:05:20 +0000 (0:00:02.355) 0:00:55.755 ****
2025-09-13 01:08:42.791442 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:08:42.791452 | orchestrator |
2025-09-13 01:08:42.791461 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-09-13 01:08:42.791471 | orchestrator | Saturday 13 September 2025 01:05:20 +0000 (0:00:00.104) 0:00:55.860 ****
2025-09-13 01:08:42.791481 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:08:42.791490 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:08:42.791500 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:08:42.791509 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:08:42.791519 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:08:42.791529 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:08:42.791538 | orchestrator |
2025-09-13 01:08:42.791548 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2025-09-13 01:08:42.791558 | orchestrator | Saturday 13 September 2025 01:05:20 +0000 (0:00:00.619) 0:00:56.480 ****
2025-09-13 01:08:42.791572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.791583 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:08:42.791593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.791603 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:08:42.791613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.791631 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:08:42.791647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.791658 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:08:42.791668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.791678 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:08:42.791692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.791702 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:08:42.791712 | orchestrator |
2025-09-13 01:08:42.791722 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2025-09-13 01:08:42.791731 | orchestrator | Saturday 13 September 2025 01:05:23 +0000 (0:00:02.999) 0:00:59.479 ****
2025-09-13 01:08:42.791741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.791757 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.791775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.791786 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.791800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.791811 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.791827 | orchestrator |
2025-09-13 01:08:42.791837 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2025-09-13 01:08:42.791865 | orchestrator | Saturday 13 September 2025 01:05:27 +0000 (0:00:03.863) 0:01:03.342 ****
2025-09-13 01:08:42.791876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.791892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.791903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.791917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.791928 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.791944 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.791954 | orchestrator |
2025-09-13 01:08:42.791964 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-09-13 01:08:42.791974 | orchestrator | Saturday 13 September 2025 01:05:34 +0000 (0:00:07.201) 0:01:10.544 ****
2025-09-13 01:08:42.791990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.792001 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:08:42.792016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.792027 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:08:42.792037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.792053 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:08:42.792063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.792074 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.792084 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:08:42.792094 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:08:42.792109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.792120 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:08:42.792130 | orchestrator |
2025-09-13 01:08:42.792139 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-09-13 01:08:42.792149 | orchestrator | Saturday 13 September 2025 01:05:37 +0000 (0:00:02.864) 0:01:13.409 ****
2025-09-13 01:08:42.792159 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:08:42.792168 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:08:42.792178 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:08:42.792188 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:08:42.792197 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:08:42.792207 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:08:42.792216 | orchestrator |
2025-09-13 01:08:42.792226 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-09-13 01:08:42.792235 | orchestrator | Saturday 13 September 2025 01:05:40 +0000 (0:00:03.170) 0:01:16.579 ****
2025-09-13 01:08:42.792250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.792269 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:08:42.792279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.792289 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:08:42.792299 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-13 01:08:42.792309 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:08:42.792326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13 01:08:42.792337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-13
01:08:42.792358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-13 01:08:42.792369 | orchestrator | 2025-09-13 01:08:42.792378 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-09-13 01:08:42.792388 | orchestrator | Saturday 13 September 2025 01:05:45 +0000 (0:00:04.466) 0:01:21.046 **** 2025-09-13 01:08:42.792398 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.792408 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.792417 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.792427 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.792436 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.792445 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.792455 | orchestrator | 2025-09-13 01:08:42.792464 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-09-13 01:08:42.792474 | orchestrator | Saturday 13 September 2025 01:05:47 +0000 (0:00:02.464) 0:01:23.511 **** 2025-09-13 01:08:42.792484 | orchestrator | skipping: [testbed-node-0] 2025-09-13 
01:08:42.792493 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.792503 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.792512 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.792522 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.792531 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.792540 | orchestrator | 2025-09-13 01:08:42.792550 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-09-13 01:08:42.792560 | orchestrator | Saturday 13 September 2025 01:05:50 +0000 (0:00:02.780) 0:01:26.291 **** 2025-09-13 01:08:42.792569 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.792579 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.792588 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.792598 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.792607 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.792617 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.792626 | orchestrator | 2025-09-13 01:08:42.792636 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-09-13 01:08:42.792646 | orchestrator | Saturday 13 September 2025 01:05:53 +0000 (0:00:02.602) 0:01:28.894 **** 2025-09-13 01:08:42.792655 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.792665 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.792674 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.792684 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.792693 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.792703 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.792712 | orchestrator | 2025-09-13 01:08:42.792722 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-09-13 01:08:42.792732 | orchestrator | Saturday 13 
September 2025 01:05:56 +0000 (0:00:02.886) 0:01:31.781 **** 2025-09-13 01:08:42.792741 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.792751 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.792760 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.792770 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.792790 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.792800 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.792809 | orchestrator | 2025-09-13 01:08:42.792819 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-09-13 01:08:42.792829 | orchestrator | Saturday 13 September 2025 01:05:58 +0000 (0:00:02.319) 0:01:34.100 **** 2025-09-13 01:08:42.792853 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.792863 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.792872 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.792882 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.792891 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.792901 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.792911 | orchestrator | 2025-09-13 01:08:42.792920 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-09-13 01:08:42.792930 | orchestrator | Saturday 13 September 2025 01:06:01 +0000 (0:00:02.855) 0:01:36.956 **** 2025-09-13 01:08:42.792940 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-13 01:08:42.792950 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.792960 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-13 01:08:42.792970 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.792979 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-13 01:08:42.792989 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.792999 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-13 01:08:42.793009 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.793018 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-13 01:08:42.793028 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.793038 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-13 01:08:42.793047 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.793057 | orchestrator | 2025-09-13 01:08:42.793071 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-09-13 01:08:42.793081 | orchestrator | Saturday 13 September 2025 01:06:03 +0000 (0:00:02.656) 0:01:39.612 **** 2025-09-13 01:08:42.793091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-13 01:08:42.793101 | orchestrator | 
skipping: [testbed-node-0] 2025-09-13 01:08:42.793111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-13 01:08:42.793128 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.793143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-13 01:08:42.793154 | orchestrator | 
skipping: [testbed-node-1] 2025-09-13 01:08:42.793164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-13 01:08:42.793174 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.793188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-13 01:08:42.793198 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.793208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': 
True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-13 01:08:42.793218 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.793228 | orchestrator | 2025-09-13 01:08:42.793237 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-13 01:08:42.793247 | orchestrator | Saturday 13 September 2025 01:06:06 +0000 (0:00:02.198) 0:01:41.811 **** 2025-09-13 01:08:42.793263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-13 01:08:42.793273 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.793290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-13 01:08:42.793300 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.793310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-13 01:08:42.793320 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.793334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-13 01:08:42.793344 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.793354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-13 01:08:42.793369 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.793379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-13 01:08:42.793389 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.793399 | orchestrator | 2025-09-13 01:08:42.793408 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-13 01:08:42.793418 | orchestrator | Saturday 13 September 2025 01:06:08 +0000 (0:00:02.473) 0:01:44.284 **** 2025-09-13 01:08:42.793427 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.793441 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.793451 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.793460 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.793470 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.793479 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.793489 | orchestrator | 2025-09-13 01:08:42.793498 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-13 01:08:42.793508 | orchestrator | Saturday 13 September 2025 01:06:12 +0000 (0:00:03.635) 0:01:47.920 **** 2025-09-13 01:08:42.793518 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.793527 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.793537 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.793546 | orchestrator | changed: [testbed-node-3] 2025-09-13 01:08:42.793556 | orchestrator | changed: [testbed-node-4] 2025-09-13 01:08:42.793565 | orchestrator | changed: [testbed-node-5] 2025-09-13 01:08:42.793575 | orchestrator | 2025-09-13 01:08:42.793585 | orchestrator | TASK [neutron : Copying over metering_agent.ini] 
******************************* 2025-09-13 01:08:42.793594 | orchestrator | Saturday 13 September 2025 01:06:17 +0000 (0:00:04.883) 0:01:52.803 **** 2025-09-13 01:08:42.793604 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.793614 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.793623 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.793633 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.793642 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.793651 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.793661 | orchestrator | 2025-09-13 01:08:42.793670 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-13 01:08:42.793680 | orchestrator | Saturday 13 September 2025 01:06:20 +0000 (0:00:03.190) 0:01:55.993 **** 2025-09-13 01:08:42.793690 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.793699 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.793709 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.793718 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.793727 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.793737 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.793746 | orchestrator | 2025-09-13 01:08:42.793756 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-13 01:08:42.793771 | orchestrator | Saturday 13 September 2025 01:06:24 +0000 (0:00:04.091) 0:02:00.085 **** 2025-09-13 01:08:42.793785 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.793795 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.793804 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.793814 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.793823 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.793833 | orchestrator | skipping: [testbed-node-5] 
2025-09-13 01:08:42.793856 | orchestrator | 2025-09-13 01:08:42.793866 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-13 01:08:42.793876 | orchestrator | Saturday 13 September 2025 01:06:28 +0000 (0:00:03.596) 0:02:03.682 **** 2025-09-13 01:08:42.793885 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.793895 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.793905 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.793914 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.793924 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.793933 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.793943 | orchestrator | 2025-09-13 01:08:42.793953 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-13 01:08:42.793963 | orchestrator | Saturday 13 September 2025 01:06:30 +0000 (0:00:02.718) 0:02:06.400 **** 2025-09-13 01:08:42.793972 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.793982 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.793992 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.794001 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.794011 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.794046 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.794055 | orchestrator | 2025-09-13 01:08:42.794065 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-13 01:08:42.794075 | orchestrator | Saturday 13 September 2025 01:06:33 +0000 (0:00:02.727) 0:02:09.128 **** 2025-09-13 01:08:42.794084 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.794094 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.794103 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.794113 | orchestrator | skipping: [testbed-node-2] 
2025-09-13 01:08:42.794122 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.794132 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.794142 | orchestrator | 2025-09-13 01:08:42.794151 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-13 01:08:42.794161 | orchestrator | Saturday 13 September 2025 01:06:36 +0000 (0:00:03.348) 0:02:12.476 **** 2025-09-13 01:08:42.794171 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.794180 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.794190 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.794199 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.794209 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.794218 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.794228 | orchestrator | 2025-09-13 01:08:42.794237 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-13 01:08:42.794247 | orchestrator | Saturday 13 September 2025 01:06:39 +0000 (0:00:03.079) 0:02:15.555 **** 2025-09-13 01:08:42.794257 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-13 01:08:42.794266 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.794276 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-13 01:08:42.794286 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.794296 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-13 01:08:42.794305 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.794315 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-13 01:08:42.794331 | orchestrator | skipping: [testbed-node-3] 2025-09-13 
01:08:42.794346 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-13 01:08:42.794356 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.794365 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-13 01:08:42.794375 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.794384 | orchestrator | 2025-09-13 01:08:42.794394 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-13 01:08:42.794404 | orchestrator | Saturday 13 September 2025 01:06:43 +0000 (0:00:03.819) 0:02:19.374 **** 2025-09-13 01:08:42.794413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-13 01:08:42.794424 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.794438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-13 01:08:42.794449 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.794459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-13 01:08:42.794469 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.794479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-13 01:08:42.794495 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.794510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-13 01:08:42.794521 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.794530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-13 01:08:42.794540 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.794550 | orchestrator | 2025-09-13 01:08:42.794567 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-13 01:08:42.794577 | orchestrator | Saturday 13 September 2025 01:06:45 +0000 (0:00:02.196) 0:02:21.570 **** 2025-09-13 01:08:42.794587 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-13 01:08:42.794597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-13 01:08:42.794618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-13 01:08:42.794629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-13 01:08:42.794643 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-13 01:08:42.794654 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-13 01:08:42.794664 | orchestrator | 2025-09-13 01:08:42.794674 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-13 01:08:42.794684 | orchestrator | Saturday 13 September 2025 01:06:50 +0000 (0:00:04.106) 0:02:25.677 **** 2025-09-13 
01:08:42.794693 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:08:42.794703 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:08:42.794713 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:08:42.794722 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:08:42.794732 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:08:42.794741 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:08:42.794756 | orchestrator | 2025-09-13 01:08:42.794766 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-09-13 01:08:42.794775 | orchestrator | Saturday 13 September 2025 01:06:50 +0000 (0:00:00.634) 0:02:26.312 **** 2025-09-13 01:08:42.794785 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:08:42.794794 | orchestrator | 2025-09-13 01:08:42.794804 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-13 01:08:42.794813 | orchestrator | Saturday 13 September 2025 01:06:52 +0000 (0:00:02.079) 0:02:28.391 **** 2025-09-13 01:08:42.794823 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:08:42.794833 | orchestrator | 2025-09-13 01:08:42.794893 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-09-13 01:08:42.794905 | orchestrator | Saturday 13 September 2025 01:06:54 +0000 (0:00:01.970) 0:02:30.361 **** 2025-09-13 01:08:42.794914 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:08:42.794924 | orchestrator | 2025-09-13 01:08:42.794934 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-13 01:08:42.794943 | orchestrator | Saturday 13 September 2025 01:07:38 +0000 (0:00:44.059) 0:03:14.421 **** 2025-09-13 01:08:42.794953 | orchestrator | 2025-09-13 01:08:42.794962 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-13 01:08:42.794972 | orchestrator | Saturday 13 
September 2025 01:07:38 +0000 (0:00:00.118) 0:03:14.539 **** 2025-09-13 01:08:42.794982 | orchestrator | 2025-09-13 01:08:42.794991 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-13 01:08:42.795001 | orchestrator | Saturday 13 September 2025 01:07:39 +0000 (0:00:00.235) 0:03:14.774 **** 2025-09-13 01:08:42.795010 | orchestrator | 2025-09-13 01:08:42.795020 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-13 01:08:42.795030 | orchestrator | Saturday 13 September 2025 01:07:39 +0000 (0:00:00.062) 0:03:14.837 **** 2025-09-13 01:08:42.795039 | orchestrator | 2025-09-13 01:08:42.795054 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-13 01:08:42.795064 | orchestrator | Saturday 13 September 2025 01:07:39 +0000 (0:00:00.069) 0:03:14.907 **** 2025-09-13 01:08:42.795074 | orchestrator | 2025-09-13 01:08:42.795083 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-13 01:08:42.795093 | orchestrator | Saturday 13 September 2025 01:07:39 +0000 (0:00:00.066) 0:03:14.974 **** 2025-09-13 01:08:42.795102 | orchestrator | 2025-09-13 01:08:42.795112 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-13 01:08:42.795122 | orchestrator | Saturday 13 September 2025 01:07:39 +0000 (0:00:00.066) 0:03:15.040 **** 2025-09-13 01:08:42.795131 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:08:42.795141 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:08:42.795149 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:08:42.795157 | orchestrator | 2025-09-13 01:08:42.795165 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-13 01:08:42.795173 | orchestrator | Saturday 13 September 2025 01:08:06 +0000 (0:00:27.326) 0:03:42.367 **** 2025-09-13 
01:08:42.795181 | orchestrator | changed: [testbed-node-4] 2025-09-13 01:08:42.795189 | orchestrator | changed: [testbed-node-3] 2025-09-13 01:08:42.795197 | orchestrator | changed: [testbed-node-5] 2025-09-13 01:08:42.795205 | orchestrator | 2025-09-13 01:08:42.795213 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 01:08:42.795221 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-13 01:08:42.795229 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-13 01:08:42.795237 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-13 01:08:42.795249 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-13 01:08:42.795263 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-13 01:08:42.795271 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-13 01:08:42.795279 | orchestrator | 2025-09-13 01:08:42.795286 | orchestrator | 2025-09-13 01:08:42.795294 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 01:08:42.795303 | orchestrator | Saturday 13 September 2025 01:08:40 +0000 (0:00:33.459) 0:04:15.827 **** 2025-09-13 01:08:42.795311 | orchestrator | =============================================================================== 2025-09-13 01:08:42.795318 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 44.06s 2025-09-13 01:08:42.795326 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 33.46s 2025-09-13 01:08:42.795334 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.33s 
2025-09-13 01:08:42.795342 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.50s 2025-09-13 01:08:42.795350 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.20s 2025-09-13 01:08:42.795358 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.36s 2025-09-13 01:08:42.795365 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.88s 2025-09-13 01:08:42.795373 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.47s 2025-09-13 01:08:42.795381 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.11s 2025-09-13 01:08:42.795389 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 4.09s 2025-09-13 01:08:42.795397 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.86s 2025-09-13 01:08:42.795404 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.82s 2025-09-13 01:08:42.795412 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.71s 2025-09-13 01:08:42.795420 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 3.64s 2025-09-13 01:08:42.795428 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.60s 2025-09-13 01:08:42.795436 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.46s 2025-09-13 01:08:42.795444 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.38s 2025-09-13 01:08:42.795451 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.35s 2025-09-13 01:08:42.795459 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.32s 2025-09-13 
01:08:42.795467 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.28s 2025-09-13 01:08:45.835296 | orchestrator | 2025-09-13 01:08:45 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED 2025-09-13 01:08:45.835534 | orchestrator | 2025-09-13 01:08:45 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:08:45.836754 | orchestrator | 2025-09-13 01:08:45 | INFO  | Task 1ff4ca93-701b-401e-ad9b-40c3f05b2848 is in state STARTED 2025-09-13 01:08:45.838161 | orchestrator | 2025-09-13 01:08:45 | INFO  | Task 18eb45b2-264f-4467-bf3e-3e62a5f7f96d is in state STARTED 2025-09-13 01:08:45.838498 | orchestrator | 2025-09-13 01:08:45 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:08:48.886009 | orchestrator | 2025-09-13 01:08:48 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:08:48.887598 | orchestrator | 2025-09-13 01:08:48 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED 2025-09-13 01:08:48.890454 | orchestrator | 2025-09-13 01:08:48 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:08:48.893305 | orchestrator | 2025-09-13 01:08:48 | INFO  | Task 1ff4ca93-701b-401e-ad9b-40c3f05b2848 is in state STARTED 2025-09-13 01:08:48.894902 | orchestrator | 2025-09-13 01:08:48 | INFO  | Task 18eb45b2-264f-4467-bf3e-3e62a5f7f96d is in state SUCCESS 2025-09-13 01:08:48.895567 | orchestrator | 2025-09-13 01:08:48 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:08:51.928464 | orchestrator | 2025-09-13 01:08:51 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:08:51.928566 | orchestrator | 2025-09-13 01:08:51 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED 2025-09-13 01:08:51.928841 | orchestrator | 2025-09-13 01:08:51 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:08:51.929776 | 
orchestrator | 2025-09-13 01:08:51 | INFO  | Task 1ff4ca93-701b-401e-ad9b-40c3f05b2848 is in state STARTED 2025-09-13 01:08:51.929799 | orchestrator | 2025-09-13 01:08:51 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:10:14.257072 | orchestrator | 2025-09-13 01:10:14 | INFO  | Task
1ff4ca93-701b-401e-ad9b-40c3f05b2848 is in state STARTED 2025-09-13 01:10:14.257098 | orchestrator | 2025-09-13 01:10:14 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:10:17.306488 | orchestrator | 2025-09-13 01:10:17 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:10:17.308338 | orchestrator | 2025-09-13 01:10:17 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED 2025-09-13 01:10:17.310353 | orchestrator | 2025-09-13 01:10:17 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:10:17.312776 | orchestrator | 2025-09-13 01:10:17 | INFO  | Task 1ff4ca93-701b-401e-ad9b-40c3f05b2848 is in state STARTED 2025-09-13 01:10:17.312895 | orchestrator | 2025-09-13 01:10:17 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:10:20.364940 | orchestrator | 2025-09-13 01:10:20 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:10:20.367750 | orchestrator | 2025-09-13 01:10:20 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED 2025-09-13 01:10:20.371160 | orchestrator | 2025-09-13 01:10:20 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:10:20.373825 | orchestrator | 2025-09-13 01:10:20 | INFO  | Task 1ff4ca93-701b-401e-ad9b-40c3f05b2848 is in state STARTED 2025-09-13 01:10:20.374682 | orchestrator | 2025-09-13 01:10:20 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:10:23.420685 | orchestrator | 2025-09-13 01:10:23 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:10:23.423180 | orchestrator | 2025-09-13 01:10:23 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED 2025-09-13 01:10:23.427136 | orchestrator | 2025-09-13 01:10:23 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:10:23.428964 | orchestrator | 2025-09-13 01:10:23 | INFO  | Task 
1ff4ca93-701b-401e-ad9b-40c3f05b2848 is in state STARTED 2025-09-13 01:10:23.429009 | orchestrator | 2025-09-13 01:10:23 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:10:26.469187 | orchestrator | 2025-09-13 01:10:26 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:10:26.469845 | orchestrator | 2025-09-13 01:10:26 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED 2025-09-13 01:10:26.470632 | orchestrator | 2025-09-13 01:10:26 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:10:26.471511 | orchestrator | 2025-09-13 01:10:26 | INFO  | Task 1ff4ca93-701b-401e-ad9b-40c3f05b2848 is in state STARTED 2025-09-13 01:10:26.471648 | orchestrator | 2025-09-13 01:10:26 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:10:29.522544 | orchestrator | 2025-09-13 01:10:29 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:10:29.529504 | orchestrator | 2025-09-13 01:10:29 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED 2025-09-13 01:10:29.532866 | orchestrator | 2025-09-13 01:10:29 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:10:29.535374 | orchestrator | 2025-09-13 01:10:29 | INFO  | Task 1ff4ca93-701b-401e-ad9b-40c3f05b2848 is in state STARTED 2025-09-13 01:10:29.536192 | orchestrator | 2025-09-13 01:10:29 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:10:32.572433 | orchestrator | 2025-09-13 01:10:32 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:10:32.573088 | orchestrator | 2025-09-13 01:10:32 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED 2025-09-13 01:10:32.575237 | orchestrator | 2025-09-13 01:10:32 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:10:32.576538 | orchestrator | 2025-09-13 01:10:32 | INFO  | Task 
1ff4ca93-701b-401e-ad9b-40c3f05b2848 is in state STARTED 2025-09-13 01:10:32.576719 | orchestrator | 2025-09-13 01:10:32 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:10:35.601679 | orchestrator | 2025-09-13 01:10:35 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:10:35.601912 | orchestrator | 2025-09-13 01:10:35 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED 2025-09-13 01:10:35.602974 | orchestrator | 2025-09-13 01:10:35 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:10:35.603924 | orchestrator | 2025-09-13 01:10:35 | INFO  | Task 1ff4ca93-701b-401e-ad9b-40c3f05b2848 is in state STARTED 2025-09-13 01:10:35.603950 | orchestrator | 2025-09-13 01:10:35 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:10:38.633818 | orchestrator | 2025-09-13 01:10:38 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:10:38.635890 | orchestrator | 2025-09-13 01:10:38 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED 2025-09-13 01:10:38.637143 | orchestrator | 2025-09-13 01:10:38 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:10:38.638122 | orchestrator | 2025-09-13 01:10:38 | INFO  | Task 1ff4ca93-701b-401e-ad9b-40c3f05b2848 is in state STARTED 2025-09-13 01:10:38.638407 | orchestrator | 2025-09-13 01:10:38 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:10:41.665812 | orchestrator | 2025-09-13 01:10:41 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:10:41.666809 | orchestrator | 2025-09-13 01:10:41 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED 2025-09-13 01:10:41.669666 | orchestrator | 2025-09-13 01:10:41 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:10:41.672840 | orchestrator | 2025-09-13 01:10:41 | INFO  | Task 
1ff4ca93-701b-401e-ad9b-40c3f05b2848 is in state SUCCESS
2025-09-13 01:10:41.675548 | orchestrator |
2025-09-13 01:10:41.675579 | orchestrator |
2025-09-13 01:10:41.675592 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 01:10:41.675605 | orchestrator |
2025-09-13 01:10:41.675616 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-13 01:10:41.675627 | orchestrator | Saturday 13 September 2025 01:08:45 +0000 (0:00:00.196) 0:00:00.196 ****
2025-09-13 01:10:41.675639 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:10:41.675651 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:10:41.675662 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:10:41.675673 | orchestrator |
2025-09-13 01:10:41.675684 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 01:10:41.675695 | orchestrator | Saturday 13 September 2025 01:08:45 +0000 (0:00:00.326) 0:00:00.522 ****
2025-09-13 01:10:41.675707 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-09-13 01:10:41.675718 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-09-13 01:10:41.675729 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-09-13 01:10:41.675740 | orchestrator |
2025-09-13 01:10:41.675750 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-09-13 01:10:41.675761 | orchestrator |
2025-09-13 01:10:41.675772 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-09-13 01:10:41.675783 | orchestrator | Saturday 13 September 2025 01:08:46 +0000 (0:00:00.662) 0:00:01.185 ****
2025-09-13 01:10:41.675815 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:10:41.675827 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:10:41.675837 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:10:41.675848 | orchestrator |
2025-09-13 01:10:41.675859 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 01:10:41.675871 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 01:10:41.675884 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 01:10:41.675895 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-13 01:10:41.675906 | orchestrator |
2025-09-13 01:10:41.675917 | orchestrator |
2025-09-13 01:10:41.675928 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 01:10:41.675939 | orchestrator | Saturday 13 September 2025 01:08:47 +0000 (0:00:00.715) 0:00:01.901 ****
2025-09-13 01:10:41.675950 | orchestrator | ===============================================================================
2025-09-13 01:10:41.675961 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.72s
2025-09-13 01:10:41.675971 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s
2025-09-13 01:10:41.675982 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2025-09-13 01:10:41.676082 | orchestrator |
2025-09-13 01:10:41.676095 | orchestrator |
2025-09-13 01:10:41.676105 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 01:10:41.676117 | orchestrator |
2025-09-13 01:10:41.676128 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-13 01:10:41.676139 | orchestrator | Saturday 13 September 2025 01:08:38 +0000 (0:00:00.350) 0:00:00.350 ****
2025-09-13 01:10:41.676152 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:10:41.676165 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:10:41.676177 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:10:41.676189 | orchestrator |
2025-09-13 01:10:41.676201 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 01:10:41.676213 | orchestrator | Saturday 13 September 2025 01:08:38 +0000 (0:00:00.365) 0:00:00.715 ****
2025-09-13 01:10:41.676225 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-09-13 01:10:41.676238 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-09-13 01:10:41.676251 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-09-13 01:10:41.676263 | orchestrator |
2025-09-13 01:10:41.676276 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-09-13 01:10:41.676289 | orchestrator |
2025-09-13 01:10:41.676301 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-13 01:10:41.676313 | orchestrator | Saturday 13 September 2025 01:08:38 +0000 (0:00:00.416) 0:00:01.131 ****
2025-09-13 01:10:41.676326 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 01:10:41.676338 | orchestrator |
2025-09-13 01:10:41.676351 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-09-13 01:10:41.676363 | orchestrator | Saturday 13 September 2025 01:08:39 +0000 (0:00:00.532) 0:00:01.663 ****
2025-09-13 01:10:41.676375 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-09-13 01:10:41.676387 | orchestrator |
2025-09-13 01:10:41.676400 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-09-13 01:10:41.676412 | orchestrator | Saturday 13 September 2025 01:08:43 +0000 (0:00:03.512) 0:00:05.176 ****
2025-09-13 01:10:41.676425 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-09-13 01:10:41.676437 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-09-13 01:10:41.676450 | orchestrator |
2025-09-13 01:10:41.676472 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-09-13 01:10:41.676485 | orchestrator | Saturday 13 September 2025 01:08:49 +0000 (0:00:06.906) 0:00:12.082 ****
2025-09-13 01:10:41.676498 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-13 01:10:41.676510 | orchestrator |
2025-09-13 01:10:41.676528 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-09-13 01:10:41.676540 | orchestrator | Saturday 13 September 2025 01:08:53 +0000 (0:00:03.689) 0:00:15.772 ****
2025-09-13 01:10:41.676562 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-13 01:10:41.676574 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-09-13 01:10:41.676585 | orchestrator |
2025-09-13 01:10:41.676596 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-09-13 01:10:41.676607 | orchestrator | Saturday 13 September 2025 01:08:57 +0000 (0:00:04.326) 0:00:20.098 ****
2025-09-13 01:10:41.676618 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-13 01:10:41.676629 | orchestrator |
2025-09-13 01:10:41.676640 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-09-13 01:10:41.676650 | orchestrator | Saturday 13 September 2025 01:09:01 +0000 (0:00:03.578) 0:00:23.676 ****
2025-09-13 01:10:41.676661 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-09-13 01:10:41.676672 | orchestrator |
2025-09-13 01:10:41.676683 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-09-13
01:10:41.676693 | orchestrator | Saturday 13 September 2025 01:09:05 +0000 (0:00:04.255) 0:00:27.932 ****
2025-09-13 01:10:41.676704 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:10:41.676715 | orchestrator |
2025-09-13 01:10:41.676726 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-09-13 01:10:41.676737 | orchestrator | Saturday 13 September 2025 01:09:09 +0000 (0:00:03.244) 0:00:31.176 ****
2025-09-13 01:10:41.676748 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:10:41.676759 | orchestrator |
2025-09-13 01:10:41.676769 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-09-13 01:10:41.676780 | orchestrator | Saturday 13 September 2025 01:09:12 +0000 (0:00:03.863) 0:00:35.040 ****
2025-09-13 01:10:41.676791 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:10:41.676802 | orchestrator |
2025-09-13 01:10:41.676813 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-09-13 01:10:41.676824 | orchestrator | Saturday 13 September 2025 01:09:16 +0000 (0:00:03.553) 0:00:38.594 ****
2025-09-13 01:10:41.676838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-13 01:10:41.676854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-13 01:10:41.676878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-13 01:10:41.676898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-13 01:10:41.676911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-13 01:10:41.676923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-13 01:10:41.676934 | orchestrator |
2025-09-13 01:10:41.676945 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-09-13 01:10:41.676957 | orchestrator | Saturday 13 September 2025 01:09:17 +0000 (0:00:01.352) 0:00:39.947 ****
2025-09-13 01:10:41.676968 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:10:41.676979 | orchestrator |
2025-09-13 01:10:41.677007 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-09-13 01:10:41.677019 | orchestrator | Saturday 13 September 2025 01:09:17 +0000 (0:00:00.142) 0:00:40.089 ****
2025-09-13 01:10:41.677037 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:10:41.677048 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:10:41.677059 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:10:41.677070 | orchestrator |
2025-09-13 01:10:41.677081 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-09-13 01:10:41.677092 | orchestrator | Saturday 13 September 2025 01:09:18 +0000 (0:00:00.488) 0:00:40.578 ****
2025-09-13 01:10:41.677103 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-13 01:10:41.677113 | orchestrator |
2025-09-13 01:10:41.677124 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-09-13 01:10:41.677135 | orchestrator | Saturday 13 September 2025 01:09:19 +0000 (0:00:00.849) 0:00:41.427 ****
2025-09-13 01:10:41.677146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True,
'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-13 01:10:41.677172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-13 01:10:41.677185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-13 01:10:41.677196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-13 01:10:41.677214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-13 01:10:41.677226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-13 01:10:41.677237 | orchestrator |
2025-09-13 01:10:41.677248 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2025-09-13 01:10:41.677259 | orchestrator | Saturday 13 September 2025 01:09:21 +0000 (0:00:02.441) 0:00:43.868 ****
2025-09-13 01:10:41.677270 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:10:41.677292 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:10:41.677303 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:10:41.677314 | orchestrator |
2025-09-13 01:10:41.677325 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-13 01:10:41.677342 | orchestrator | Saturday 13 September 2025 01:09:22 +0000 (0:00:00.296) 0:00:44.164 ****
2025-09-13 01:10:41.677353 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 01:10:41.677364 | orchestrator |
2025-09-13 01:10:41.677375 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates]
*********
2025-09-13 01:10:41.677385 | orchestrator | Saturday 13 September 2025 01:09:22 +0000 (0:00:00.722) 0:00:44.886 ****
2025-09-13 01:10:41.677397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-13 01:10:41.677409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-13 01:10:41.677428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-13 01:10:41.677439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-13 01:10:41.677463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-13 01:10:41.677475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-13 01:10:41.677487 | orchestrator |
2025-09-13 01:10:41.677497 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2025-09-13 01:10:41.677509 | orchestrator | Saturday 13 September 2025 01:09:25 +0000 (0:00:02.274) 0:00:47.161 ****
2025-09-13 01:10:41.677527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-13 01:10:41.677539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-13 01:10:41.677550 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:10:41.677562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes',
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-13 01:10:41.677585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-13 01:10:41.677597 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:10:41.677608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}})  2025-09-13 01:10:41.677626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-13 01:10:41.677638 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:10:41.677649 | orchestrator | 2025-09-13 01:10:41.677659 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-13 01:10:41.677670 | orchestrator | Saturday 13 September 2025 01:09:25 +0000 (0:00:00.699) 0:00:47.861 **** 2025-09-13 01:10:41.677682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-13 01:10:41.677698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-13 01:10:41.677710 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:10:41.677727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-13 01:10:41.677745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-13 01:10:41.677757 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:10:41.677768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-13 01:10:41.677780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-13 01:10:41.677791 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:10:41.677802 | orchestrator | 2025-09-13 01:10:41.677813 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-13 01:10:41.677824 | orchestrator | Saturday 13 September 2025 01:09:26 +0000 (0:00:01.037) 0:00:48.898 **** 2025-09-13 01:10:41.678153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-13 01:10:41.678179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-13 01:10:41.678201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-13 01:10:41.678213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:10:41.678225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:10:41.678251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:10:41.678263 | orchestrator | 2025-09-13 01:10:41.678275 | 
orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-13 01:10:41.678286 | orchestrator | Saturday 13 September 2025 01:09:29 +0000 (0:00:02.497) 0:00:51.396 **** 2025-09-13 01:10:41.678304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-13 01:10:41.678316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-13 01:10:41.678328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-13 01:10:41.678340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:10:41.678363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:10:41.678382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:10:41.678394 | orchestrator | 2025-09-13 01:10:41.678405 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-13 01:10:41.678416 | orchestrator | Saturday 13 September 2025 01:09:34 +0000 (0:00:05.093) 0:00:56.489 **** 2025-09-13 01:10:41.678428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-13 01:10:41.678440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-13 01:10:41.678452 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:10:41.678464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-13 01:10:41.678482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-13 01:10:41.678500 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:10:41.678512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-13 01:10:41.678524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-13 01:10:41.678536 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:10:41.678547 | orchestrator | 2025-09-13 01:10:41.678558 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-13 01:10:41.678609 | orchestrator | Saturday 13 September 2025 01:09:34 +0000 (0:00:00.648) 0:00:57.138 **** 2025-09-13 01:10:41.678621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-13 01:10:41.678644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-13 01:10:41.678688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-13 01:10:41.678701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:10:41.678713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:10:41.678724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-13 01:10:41.678736 | orchestrator |
2025-09-13 01:10:41.678747 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-13 01:10:41.678758 | orchestrator | Saturday 13 September 2025 01:09:37 +0000 (0:00:02.556) 0:00:59.694 ****
2025-09-13 01:10:41.678769 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:10:41.678780 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:10:41.678797 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:10:41.678808 | orchestrator |
2025-09-13 01:10:41.678819 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-09-13 01:10:41.678830 | orchestrator | Saturday 13 September 2025 01:09:37 +0000 (0:00:00.398) 0:01:00.093 ****
2025-09-13 01:10:41.678841 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:10:41.678852 | orchestrator |
2025-09-13 01:10:41.678863 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-09-13 01:10:41.678874 | orchestrator | Saturday 13 September 2025 01:09:40 +0000 (0:00:02.254) 0:01:02.347 ****
2025-09-13 01:10:41.678884 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:10:41.678895 | orchestrator |
2025-09-13 01:10:41.678911 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-09-13 01:10:41.678922 | orchestrator | Saturday 13 September 2025 01:09:42 +0000 (0:00:02.420) 0:01:04.768 ****
2025-09-13 01:10:41.678939 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:10:41.678950 | orchestrator |
2025-09-13 01:10:41.678961 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-13 01:10:41.678972 | orchestrator | Saturday 13 September 2025 01:10:02 +0000 (0:00:19.507) 0:01:24.275 ****
2025-09-13 01:10:41.678983 | orchestrator |
2025-09-13 01:10:41.679064 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-13 01:10:41.679076 | orchestrator | Saturday 13 September 2025 01:10:02 +0000 (0:00:00.069) 0:01:24.345 ****
2025-09-13 01:10:41.679087 | orchestrator |
2025-09-13 01:10:41.679098 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-13 01:10:41.679109 | orchestrator | Saturday 13 September 2025 01:10:02 +0000 (0:00:00.076) 0:01:24.421 ****
2025-09-13 01:10:41.679119 | orchestrator |
2025-09-13 01:10:41.679129 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-09-13 01:10:41.679138 | orchestrator | Saturday 13 September 2025 01:10:02 +0000 (0:00:00.074) 0:01:24.496 ****
2025-09-13 01:10:41.679148 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:10:41.679158 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:10:41.679167 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:10:41.679177 | orchestrator |
2025-09-13 01:10:41.679186 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-09-13 01:10:41.679196 | orchestrator | Saturday 13 September 2025 01:10:25 +0000 (0:00:22.737) 0:01:47.234 ****
2025-09-13 01:10:41.679206 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:10:41.679215 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:10:41.679225 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:10:41.679234 | orchestrator |
2025-09-13 01:10:41.679244 | orchestrator | PLAY RECAP *********************************************************************
2025-09-13 01:10:41.679254 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-13 01:10:41.679264 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-13 01:10:41.679274 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-13 01:10:41.679284 | orchestrator |
2025-09-13 01:10:41.679293 | orchestrator |
2025-09-13 01:10:41.679303 | orchestrator | TASKS RECAP ********************************************************************
2025-09-13 01:10:41.679312 | orchestrator | Saturday 13 September 2025 01:10:39 +0000 (0:00:14.250) 0:02:01.484 ****
2025-09-13 01:10:41.679322 | orchestrator | ===============================================================================
2025-09-13 01:10:41.679332 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 22.74s
2025-09-13 01:10:41.679341 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 19.51s
2025-09-13 01:10:41.679351 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.25s
2025-09-13 01:10:41.679361 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.91s
2025-09-13 01:10:41.679377 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.09s
2025-09-13 01:10:41.679387 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.33s
2025-09-13 01:10:41.679396 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.26s
2025-09-13 01:10:41.679406 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.86s
2025-09-13 01:10:41.679415 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.69s
2025-09-13 01:10:41.679425 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.58s
2025-09-13 01:10:41.679435 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.55s
2025-09-13 01:10:41.679444 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.51s
2025-09-13 01:10:41.679454 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.24s
2025-09-13 01:10:41.679464 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.56s
2025-09-13 01:10:41.679473 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.50s
2025-09-13 01:10:41.679483 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.44s
2025-09-13 01:10:41.679492 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.42s
2025-09-13 01:10:41.679502 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.27s
2025-09-13 01:10:41.679511 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.25s
2025-09-13 01:10:41.679521 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.35s
2025-09-13 01:10:41.679531 | orchestrator | 2025-09-13 01:10:41 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:10:44.717573 | orchestrator | 2025-09-13 01:10:44 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED
2025-09-13 01:10:44.718664 | orchestrator | 2025-09-13 01:10:44 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED
2025-09-13 01:10:44.721327 | orchestrator | 2025-09-13 01:10:44 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:10:44.721386 | orchestrator | 2025-09-13 01:10:44 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:10:47.774102 | orchestrator | 2025-09-13 01:10:47 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED
2025-09-13 01:10:47.775920 | orchestrator | 2025-09-13 01:10:47 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED
2025-09-13 01:10:47.778010 | orchestrator | 2025-09-13 01:10:47 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:10:47.778284 | orchestrator | 2025-09-13 01:10:47 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:10:50.823360 | orchestrator | 2025-09-13 01:10:50 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED
2025-09-13 01:10:50.825182 | orchestrator | 2025-09-13 01:10:50 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED
2025-09-13 01:10:50.826847 | orchestrator | 2025-09-13 01:10:50 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:10:50.826875 | orchestrator | 2025-09-13 01:10:50 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:10:53.878273 | orchestrator | 2025-09-13 01:10:53 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED
2025-09-13 01:10:53.880377 | orchestrator | 2025-09-13 01:10:53 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED
2025-09-13 01:10:53.883078 | orchestrator | 2025-09-13 01:10:53 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:10:53.883139 | orchestrator | 2025-09-13 01:10:53 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:10:56.930554 | orchestrator | 2025-09-13 01:10:56 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED
2025-09-13 01:10:56.932257 | orchestrator | 2025-09-13 01:10:56 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state STARTED
2025-09-13 01:10:56.936075 | orchestrator | 2025-09-13 01:10:56 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED
2025-09-13 01:10:56.936109 | orchestrator | 2025-09-13 01:10:56 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:10:59.979439 | orchestrator | 2025-09-13 01:10:59 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED
2025-09-13 01:10:59.983884 | orchestrator | 2025-09-13 01:10:59 | INFO  | Task 7babd38a-07de-4c18-b03d-658d40216760 is in state SUCCESS
2025-09-13 01:10:59.985797 | orchestrator |
2025-09-13 01:10:59.985829 | orchestrator |
2025-09-13 01:10:59.985841 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 01:10:59.985854 | orchestrator |
2025-09-13 01:10:59.985865 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-13 01:10:59.985876 | orchestrator | Saturday 13 September 2025 01:08:44 +0000 (0:00:00.275) 0:00:00.275 ****
2025-09-13 01:10:59.985986 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:10:59.986107 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:10:59.986124 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:10:59.986135 | orchestrator |
2025-09-13 01:10:59.986146 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 01:10:59.986157 | orchestrator | Saturday 13 September 2025 01:08:45 +0000 (0:00:00.304) 0:00:00.579 ****
2025-09-13 01:10:59.986169 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-09-13 01:10:59.986265 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-09-13 01:10:59.986281 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-09-13 01:10:59.986292 | orchestrator |
2025-09-13 01:10:59.986303 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-09-13 01:10:59.986314 | orchestrator |
2025-09-13 01:10:59.986325 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-09-13 01:10:59.986336 | orchestrator | Saturday 13 September 2025 01:08:45 +0000
(0:00:00.431) 0:00:01.011 **** 2025-09-13 01:10:59.986347 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 01:10:59.986358 | orchestrator | 2025-09-13 01:10:59.986370 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-13 01:10:59.986381 | orchestrator | Saturday 13 September 2025 01:08:46 +0000 (0:00:00.531) 0:00:01.543 **** 2025-09-13 01:10:59.986396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-13 01:10:59.986427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-13 01:10:59.986463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-13 01:10:59.986477 | orchestrator |
2025-09-13 01:10:59.986489 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-09-13 01:10:59.986502 | orchestrator | Saturday 13 September 2025 01:08:46 +0000 (0:00:00.887) 0:00:02.430 ****
2025-09-13 01:10:59.986515 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-09-13 01:10:59.986528 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-09-13 01:10:59.986541 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-13 01:10:59.986553 | orchestrator |
2025-09-13 01:10:59.986566 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-09-13 01:10:59.986578 | orchestrator | Saturday 13 September 2025 01:08:47 +0000 (0:00:00.830) 0:00:03.261 ****
2025-09-13 01:10:59.986590 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 01:10:59.986603 | orchestrator |
2025-09-13 01:10:59.986615 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-09-13 01:10:59.986628 | orchestrator | Saturday 13 September 2025 01:08:48 +0000 (0:00:00.679) 0:00:03.940 ****
2025-09-13 01:10:59.986656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value':
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-13 01:10:59.986671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-13 01:10:59.986684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 
2025-09-13 01:10:59.986705 | orchestrator | 2025-09-13 01:10:59.987358 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-13 01:10:59.987371 | orchestrator | Saturday 13 September 2025 01:08:49 +0000 (0:00:01.466) 0:00:05.407 **** 2025-09-13 01:10:59.987384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-13 01:10:59.987396 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:10:59.987408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-13 01:10:59.987420 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:10:59.987469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-13 01:10:59.987483 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:10:59.987495 | orchestrator | 2025-09-13 01:10:59.987506 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-13 01:10:59.987517 | orchestrator | Saturday 13 September 2025 01:08:50 +0000 (0:00:00.384) 0:00:05.791 **** 2025-09-13 01:10:59.987529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-13 01:10:59.987541 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:10:59.987552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-13 01:10:59.987575 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:10:59.987592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-13 01:10:59.987604 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:10:59.987615 | orchestrator | 2025-09-13 01:10:59.987626 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-13 01:10:59.987638 | orchestrator | Saturday 13 September 2025 01:08:51 +0000 (0:00:01.315) 0:00:07.106 **** 2025-09-13 01:10:59.987649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-13 01:10:59.987662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-13 01:10:59.987705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-13 01:10:59.987742 | orchestrator | 2025-09-13 01:10:59.987753 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-13 01:10:59.987764 | orchestrator | Saturday 13 September 2025 01:08:53 +0000 (0:00:01.746) 0:00:08.853 **** 2025-09-13 01:10:59.987775 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-13 01:10:59.987799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-13 01:10:59.987811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-13 01:10:59.987824 | orchestrator |
2025-09-13 01:10:59.987836 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-09-13 01:10:59.987847 | orchestrator | Saturday 13 September 2025 01:08:54 +0000 (0:00:01.567) 0:00:10.421 ****
2025-09-13 01:10:59.987859 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:10:59.987869 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:10:59.987881 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:10:59.987892 | orchestrator |
2025-09-13 01:10:59.987903 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-09-13 01:10:59.987914 | orchestrator | Saturday 13 September 2025 01:08:55 +0000 (0:00:00.628) 0:00:11.049 ****
2025-09-13 01:10:59.987925 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-13 01:10:59.987939 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-13 01:10:59.987952 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-13 01:10:59.987964 | orchestrator |
2025-09-13 01:10:59.987977 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-09-13 01:10:59.987989 | orchestrator | Saturday 13 September 2025 01:08:56 +0000 (0:00:01.318) 0:00:12.368 ****
2025-09-13 01:10:59.988023 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-13 01:10:59.988036 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-13 01:10:59.988048 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-13 01:10:59.988060 | orchestrator |
2025-09-13 01:10:59.988072 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-09-13 01:10:59.988084 | orchestrator | Saturday 13 September 2025 01:08:58 +0000 (0:00:01.338) 0:00:13.707 ****
2025-09-13 01:10:59.988129 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-13 01:10:59.988143 | orchestrator |
2025-09-13 01:10:59.988156 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-09-13 01:10:59.988168 | orchestrator | Saturday 13 September 2025 01:08:58 +0000 (0:00:00.805) 0:00:14.512 ****
2025-09-13 01:10:59.988180 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-09-13 01:10:59.988200 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-09-13 01:10:59.988213 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:10:59.988226 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:10:59.988238 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:10:59.988251 | orchestrator |
2025-09-13 01:10:59.988263 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-09-13 01:10:59.988275 | orchestrator | Saturday 13 September 2025 01:08:59 +0000 (0:00:00.737) 0:00:15.249 ****
2025-09-13 01:10:59.988287 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:10:59.988298 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:10:59.988309 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:10:59.988320 | orchestrator |
2025-09-13 01:10:59.988330 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-09-13 01:10:59.988341 | orchestrator | Saturday 13 September 2025 01:09:00 +0000 (0:00:00.602) 0:00:15.851 ****
2025-09-13 01:10:59.988353 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1845541, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.2988238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-13 01:10:59.988372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', ...}})
2025-09-13 01:10:59.988384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', ...}})
2025-09-13 01:10:59.988396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', ...}})
2025-09-13 01:10:59.988435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', ...}})
2025-09-13 01:10:59.988455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', ...}})
2025-09-13 01:10:59.988466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', ...}})
2025-09-13 01:10:59.988477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', ...}})
2025-09-13 01:10:59.988494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', ...}})
2025-09-13 01:10:59.988505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', ...}})
2025-09-13 01:10:59.988517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', ...}})
2025-09-13 01:10:59.988563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', ...}})
2025-09-13 01:10:59.988576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', ...}})
2025-09-13 01:10:59.988587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', ...}})
2025-09-13 01:10:59.988603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', ...}})
2025-09-13 01:10:59.988615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', ...}})
2025-09-13 01:10:59.988627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', ...}})
2025-09-13 01:10:59.988677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', ...}})
2025-09-13 01:10:59.988690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', ...}})
2025-09-13 01:10:59.988702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', ...}})
2025-09-13 01:10:59.988719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', ...}})
2025-09-13 01:10:59.988730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', ...}})
2025-09-13 01:10:59.988741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', ...}})
2025-09-13 01:10:59.988759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', ...}})
2025-09-13 01:10:59.988797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', ...}})
2025-09-13 01:10:59.988810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', ...}})
2025-09-13 01:10:59.988821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', ...}})
2025-09-13 01:10:59.988837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', ...}})
2025-09-13 01:10:59.988849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', ...}})
2025-09-13 01:10:59.988867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', ...}})
2025-09-13 01:10:59.988907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', ...}})
2025-09-13 01:10:59.988920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', ...}})
2025-09-13 01:10:59.988931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', ...}})
2025-09-13 01:10:59.988948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', ...}})
2025-09-13 01:10:59.988960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', ...}})
2025-09-13 01:10:59.988971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', ...}})
2025-09-13 01:10:59.988993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', ...}})
2025-09-13 01:10:59.989051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', ...}})
2025-09-13 01:10:59.989063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', ...}})
2025-09-13 01:10:59.989080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', ...}})
2025-09-13 01:10:59.989091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', ...}})
2025-09-13 01:10:59.989103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', ...}})
2025-09-13 01:10:59.989121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', ...}})
2025-09-13 01:10:59.989141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', ...}})
2025-09-13 01:10:59.989153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', ...}})
2025-09-13 01:10:59.989164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', ...}})
2025-09-13 01:10:59.989180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', ...}})
2025-09-13 01:10:59.989192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', ...}})
2025-09-13 01:10:59.989210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', ...}})
2025-09-13 01:10:59.989230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', ...}})
2025-09-13 01:10:59.989241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', ...}})
2025-09-13 01:10:59.989253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', ...}})
2025-09-13 01:10:59.989269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', ...}})
2025-09-13 01:10:59.989281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', ...}})
2025-09-13 01:10:59.989298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', ...}})
2025-09-13 01:10:59.989315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', ...}})
2025-09-13 01:10:59.989327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', ...}})
2025-09-13 01:10:59.989339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', ...}})
2025-09-13 01:10:59.989355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', ...}})
2025-09-13 01:10:59.989366 | orchestrator | changed: [testbed-node-1] => (item={'key':
'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1845581, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3314974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1845566, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3178241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1845566, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3178241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989415 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1845566, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3178241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1845563, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.312824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1845563, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.312824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-09-13 01:10:59.989457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1845563, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.312824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1845570, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3209355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1845570, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3209355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1845570, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3209355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1845560, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3109946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1845560, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3109946, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1845560, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3109946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1845574, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3271465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1845574, 'dev': 105, 
'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3271465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1845574, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3271465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1845571, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.324824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1845571, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.324824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1845571, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.324824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1845575, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3277733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1845575, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3277733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1845575, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3277733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1845579, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3308241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989700 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1845579, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3308241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1845579, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3308241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1845573, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3258243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-09-13 01:10:59.989745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1845573, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3258243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1845573, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3258243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1845568, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3201258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1845568, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3201258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1845568, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3201258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1845565, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.314824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1845565, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.314824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1845565, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.314824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1845567, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.318824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1845567, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.318824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1845567, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.318824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1845564, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 
1757722819.3139427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1845564, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3139427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1845564, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3139427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
16098, 'inode': 1845569, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3206618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1845569, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3206618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1845569, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3206618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.989995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1845578, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3298242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.990058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1845578, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3298242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.990070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1845578, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3298242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.990088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1845577, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3288243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.990100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1845577, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3288243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.990118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1845577, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3288243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-09-13 01:10:59.990139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1845561, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3115976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.990151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1845561, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3115976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.990162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1845561, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3115976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.990178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1845562, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.311824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.990190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1845562, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.311824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.990208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1845562, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.311824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.990224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1845572, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3258243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.990237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1845572, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3258243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.990248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1845576, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 
1757721739.0, 'ctime': 1757722819.3282228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.990260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1845572, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3258243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.990277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1845576, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3282228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.990297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 21898, 'inode': 1845576, 'dev': 105, 'nlink': 1, 'atime': 1757721739.0, 'mtime': 1757721739.0, 'ctime': 1757722819.3282228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-13 01:10:59.990308 | orchestrator | 2025-09-13 01:10:59.990320 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-13 01:10:59.990331 | orchestrator | Saturday 13 September 2025 01:09:38 +0000 (0:00:38.603) 0:00:54.454 **** 2025-09-13 01:10:59.990347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-13 01:10:59.990359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000'}}}}) 2025-09-13 01:10:59.990370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-13 01:10:59.990382 | orchestrator | 2025-09-13 01:10:59.990393 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-13 01:10:59.990404 | orchestrator | Saturday 13 September 2025 01:09:40 +0000 (0:00:01.097) 0:00:55.552 **** 2025-09-13 01:10:59.990415 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:10:59.990426 | orchestrator | 2025-09-13 01:10:59.990437 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-13 01:10:59.990448 | orchestrator | Saturday 13 September 2025 01:09:42 +0000 (0:00:02.407) 0:00:57.959 **** 2025-09-13 01:10:59.990459 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:10:59.990470 | orchestrator | 2025-09-13 01:10:59.990481 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-13 01:10:59.990492 | orchestrator | Saturday 13 September 2025 01:09:44 +0000 (0:00:02.272) 0:01:00.231 **** 2025-09-13 01:10:59.990503 | orchestrator | 2025-09-13 01:10:59.990513 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-13 01:10:59.990536 | orchestrator | Saturday 13 September 2025 01:09:44 +0000 
(0:00:00.074) 0:01:00.306 **** 2025-09-13 01:10:59.990547 | orchestrator | 2025-09-13 01:10:59.990558 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-13 01:10:59.990569 | orchestrator | Saturday 13 September 2025 01:09:44 +0000 (0:00:00.080) 0:01:00.386 **** 2025-09-13 01:10:59.990580 | orchestrator | 2025-09-13 01:10:59.990590 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-13 01:10:59.990601 | orchestrator | Saturday 13 September 2025 01:09:45 +0000 (0:00:00.240) 0:01:00.626 **** 2025-09-13 01:10:59.990612 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:10:59.990623 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:10:59.990634 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:10:59.990645 | orchestrator | 2025-09-13 01:10:59.990656 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-09-13 01:10:59.990666 | orchestrator | Saturday 13 September 2025 01:09:46 +0000 (0:00:01.764) 0:01:02.390 **** 2025-09-13 01:10:59.990677 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:10:59.990688 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:10:59.990699 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-09-13 01:10:59.990710 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-13 01:10:59.990722 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2025-09-13 01:10:59.990733 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:10:59.990744 | orchestrator | 2025-09-13 01:10:59.990755 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-09-13 01:10:59.990766 | orchestrator | Saturday 13 September 2025 01:10:25 +0000 (0:00:38.483) 0:01:40.874 **** 2025-09-13 01:10:59.990776 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:10:59.990787 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:10:59.990798 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:10:59.990809 | orchestrator | 2025-09-13 01:10:59.990820 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-09-13 01:10:59.990831 | orchestrator | Saturday 13 September 2025 01:10:53 +0000 (0:00:28.025) 0:02:08.900 **** 2025-09-13 01:10:59.990842 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:10:59.990852 | orchestrator | 2025-09-13 01:10:59.990863 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-09-13 01:10:59.990874 | orchestrator | Saturday 13 September 2025 01:10:55 +0000 (0:00:02.237) 0:02:11.137 **** 2025-09-13 01:10:59.990885 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:10:59.990896 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:10:59.990907 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:10:59.990918 | orchestrator | 2025-09-13 01:10:59.990929 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-09-13 01:10:59.990944 | orchestrator | Saturday 13 September 2025 01:10:56 +0000 (0:00:00.512) 0:02:11.650 **** 2025-09-13 01:10:59.990957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2025-09-13 01:10:59.990972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-09-13 01:10:59.990984 | orchestrator | 2025-09-13 01:10:59.990995 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-09-13 01:10:59.991057 | orchestrator | Saturday 13 September 2025 01:10:58 +0000 (0:00:02.306) 0:02:13.956 **** 2025-09-13 01:10:59.991076 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:10:59.991087 | orchestrator | 2025-09-13 01:10:59.991098 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 01:10:59.991110 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-13 01:10:59.991121 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-13 01:10:59.991133 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-13 01:10:59.991143 | orchestrator | 2025-09-13 01:10:59.991154 | orchestrator | 2025-09-13 01:10:59.991165 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 01:10:59.991176 | orchestrator | Saturday 13 September 2025 01:10:58 +0000 (0:00:00.280) 0:02:14.237 **** 2025-09-13 01:10:59.991187 | orchestrator | =============================================================================== 2025-09-13 01:10:59.991198 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.60s 2025-09-13 01:10:59.991209 | orchestrator | grafana : Waiting for grafana 
to start on first node ------------------- 38.48s 2025-09-13 01:10:59.991220 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 28.03s 2025-09-13 01:10:59.991230 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.41s 2025-09-13 01:10:59.991241 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.31s 2025-09-13 01:10:59.991258 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.27s 2025-09-13 01:10:59.991270 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.24s 2025-09-13 01:10:59.991280 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.76s 2025-09-13 01:10:59.991291 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.75s 2025-09-13 01:10:59.991302 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.57s 2025-09-13 01:10:59.991313 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.47s 2025-09-13 01:10:59.991324 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.34s 2025-09-13 01:10:59.991335 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.32s 2025-09-13 01:10:59.991345 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.32s 2025-09-13 01:10:59.991356 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.10s 2025-09-13 01:10:59.991367 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.89s 2025-09-13 01:10:59.991378 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.83s 2025-09-13 01:10:59.991388 | orchestrator | grafana : Find custom grafana dashboards 
-------------------------------- 0.81s 2025-09-13 01:10:59.991399 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.74s 2025-09-13 01:10:59.991410 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.68s 2025-09-13 01:10:59.991421 | orchestrator | 2025-09-13 01:10:59 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:10:59.991432 | orchestrator | 2025-09-13 01:10:59 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:11:03.042160 | orchestrator | 2025-09-13 01:11:03 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:11:03.044606 | orchestrator | 2025-09-13 01:11:03 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:11:03.044637 | orchestrator | 2025-09-13 01:11:03 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:11:06.084837 | orchestrator | 2025-09-13 01:11:06 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:11:06.087399 | orchestrator | 2025-09-13 01:11:06 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:11:06.087557 | orchestrator | 2025-09-13 01:11:06 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:11:09.128509 | orchestrator | 2025-09-13 01:11:09 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:11:09.131313 | orchestrator | 2025-09-13 01:11:09 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:11:09.131346 | orchestrator | 2025-09-13 01:11:09 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:11:12.170462 | orchestrator | 2025-09-13 01:11:12 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:11:12.172512 | orchestrator | 2025-09-13 01:11:12 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state STARTED 2025-09-13 01:11:12.172640 | 
orchestrator | 2025-09-13 01:11:12 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:11:15.210481 | orchestrator | 2025-09-13 01:11:15 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:11:15.216297 | orchestrator | 2025-09-13 01:11:15 | INFO  | Task 48176d52-70eb-4f83-9826-1dd2e9823469 is in state SUCCESS 2025-09-13 01:11:15.218286 | orchestrator | 2025-09-13 01:11:15.218322 | orchestrator | 2025-09-13 01:11:15.218333 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-13 01:11:15.218344 | orchestrator | 2025-09-13 01:11:15.218448 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-09-13 01:11:15.218463 | orchestrator | Saturday 13 September 2025 01:02:04 +0000 (0:00:00.283) 0:00:00.283 **** 2025-09-13 01:11:15.218474 | orchestrator | changed: [testbed-manager] 2025-09-13 01:11:15.218530 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:11:15.218542 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:11:15.218552 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:11:15.218563 | orchestrator | changed: [testbed-node-3] 2025-09-13 01:11:15.218573 | orchestrator | changed: [testbed-node-4] 2025-09-13 01:11:15.218583 | orchestrator | changed: [testbed-node-5] 2025-09-13 01:11:15.218594 | orchestrator | 2025-09-13 01:11:15.218604 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-13 01:11:15.218615 | orchestrator | Saturday 13 September 2025 01:02:05 +0000 (0:00:01.214) 0:00:01.498 **** 2025-09-13 01:11:15.218625 | orchestrator | changed: [testbed-manager] 2025-09-13 01:11:15.218635 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:11:15.218646 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:11:15.218656 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:11:15.218666 | orchestrator | changed: [testbed-node-3] 2025-09-13 
01:11:15.218676 | orchestrator | changed: [testbed-node-4] 2025-09-13 01:11:15.218687 | orchestrator | changed: [testbed-node-5] 2025-09-13 01:11:15.218697 | orchestrator | 2025-09-13 01:11:15.218708 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-13 01:11:15.218718 | orchestrator | Saturday 13 September 2025 01:02:06 +0000 (0:00:00.623) 0:00:02.121 **** 2025-09-13 01:11:15.218729 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-09-13 01:11:15.218739 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-09-13 01:11:15.218749 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-09-13 01:11:15.218760 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-09-13 01:11:15.218770 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-09-13 01:11:15.218780 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-09-13 01:11:15.218790 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-09-13 01:11:15.218824 | orchestrator | 2025-09-13 01:11:15.218835 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-09-13 01:11:15.218846 | orchestrator | 2025-09-13 01:11:15.218858 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-13 01:11:15.218870 | orchestrator | Saturday 13 September 2025 01:02:07 +0000 (0:00:00.847) 0:00:02.969 **** 2025-09-13 01:11:15.218883 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 01:11:15.218895 | orchestrator | 2025-09-13 01:11:15.218907 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-09-13 01:11:15.218919 | orchestrator | Saturday 13 September 2025 01:02:08 +0000 (0:00:01.056) 0:00:04.025 **** 2025-09-13 01:11:15.218930 | orchestrator | changed: 
[testbed-node-0] => (item=nova_cell0) 2025-09-13 01:11:15.218942 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-09-13 01:11:15.218953 | orchestrator | 2025-09-13 01:11:15.218964 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-09-13 01:11:15.218975 | orchestrator | Saturday 13 September 2025 01:02:12 +0000 (0:00:04.066) 0:00:08.092 **** 2025-09-13 01:11:15.218987 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-13 01:11:15.218998 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-13 01:11:15.219029 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:11:15.219041 | orchestrator | 2025-09-13 01:11:15.219052 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-13 01:11:15.219062 | orchestrator | Saturday 13 September 2025 01:02:16 +0000 (0:00:04.161) 0:00:12.253 **** 2025-09-13 01:11:15.219073 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:11:15.219084 | orchestrator | 2025-09-13 01:11:15.219094 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-09-13 01:11:15.219105 | orchestrator | Saturday 13 September 2025 01:02:17 +0000 (0:00:00.685) 0:00:12.938 **** 2025-09-13 01:11:15.219116 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:11:15.219126 | orchestrator | 2025-09-13 01:11:15.219137 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-09-13 01:11:15.219148 | orchestrator | Saturday 13 September 2025 01:02:18 +0000 (0:00:01.589) 0:00:14.528 **** 2025-09-13 01:11:15.219685 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:11:15.219699 | orchestrator | 2025-09-13 01:11:15.219723 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-13 01:11:15.219733 | orchestrator | Saturday 13 September 2025 01:02:21 +0000 (0:00:02.784) 
0:00:17.312 **** 2025-09-13 01:11:15.219743 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.219752 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.219762 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.219772 | orchestrator | 2025-09-13 01:11:15.219783 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-13 01:11:15.219792 | orchestrator | Saturday 13 September 2025 01:02:22 +0000 (0:00:00.383) 0:00:17.696 **** 2025-09-13 01:11:15.219802 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:11:15.219812 | orchestrator | 2025-09-13 01:11:15.219821 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-09-13 01:11:15.219831 | orchestrator | Saturday 13 September 2025 01:02:53 +0000 (0:00:30.951) 0:00:48.648 **** 2025-09-13 01:11:15.219841 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:11:15.219850 | orchestrator | 2025-09-13 01:11:15.219860 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-13 01:11:15.219870 | orchestrator | Saturday 13 September 2025 01:03:05 +0000 (0:00:12.050) 0:01:00.698 **** 2025-09-13 01:11:15.219879 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:11:15.219889 | orchestrator | 2025-09-13 01:11:15.219898 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-13 01:11:15.219908 | orchestrator | Saturday 13 September 2025 01:03:15 +0000 (0:00:10.762) 0:01:11.461 **** 2025-09-13 01:11:15.219949 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:11:15.219960 | orchestrator | 2025-09-13 01:11:15.219969 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-09-13 01:11:15.219991 | orchestrator | Saturday 13 September 2025 01:03:16 +0000 (0:00:01.050) 0:01:12.511 **** 2025-09-13 01:11:15.220001 | orchestrator | skipping: [testbed-node-0] 
2025-09-13 01:11:15.220047 | orchestrator | 
2025-09-13 01:11:15.220058 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-13 01:11:15.220067 | orchestrator | Saturday 13 September 2025 01:03:17 +0000 (0:00:00.473) 0:01:12.984 ****
2025-09-13 01:11:15.220077 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 01:11:15.220087 | orchestrator | 
2025-09-13 01:11:15.220097 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-13 01:11:15.220106 | orchestrator | Saturday 13 September 2025 01:03:17 +0000 (0:00:00.504) 0:01:13.489 ****
2025-09-13 01:11:15.220116 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:11:15.220125 | orchestrator | 
2025-09-13 01:11:15.220135 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-13 01:11:15.220145 | orchestrator | Saturday 13 September 2025 01:03:35 +0000 (0:00:17.718) 0:01:31.207 ****
2025-09-13 01:11:15.220154 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.220164 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.220173 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.220183 | orchestrator | 
2025-09-13 01:11:15.220193 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-09-13 01:11:15.220202 | orchestrator | 
2025-09-13 01:11:15.220212 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-13 01:11:15.220824 | orchestrator | Saturday 13 September 2025 01:03:35 +0000 (0:00:00.319) 0:01:31.526 ****
2025-09-13 01:11:15.220841 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 01:11:15.220850 | orchestrator | 
2025-09-13 01:11:15.220860 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-09-13 01:11:15.220870 | orchestrator | Saturday 13 September 2025 01:03:36 +0000 (0:00:00.578) 0:01:32.105 ****
2025-09-13 01:11:15.220879 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.220889 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.220899 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:11:15.220908 | orchestrator | 
2025-09-13 01:11:15.220918 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-09-13 01:11:15.220927 | orchestrator | Saturday 13 September 2025 01:03:38 +0000 (0:00:02.119) 0:01:34.224 ****
2025-09-13 01:11:15.220937 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.220947 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.220956 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:11:15.220966 | orchestrator | 
2025-09-13 01:11:15.220975 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-13 01:11:15.220985 | orchestrator | Saturday 13 September 2025 01:03:41 +0000 (0:00:02.372) 0:01:36.597 ****
2025-09-13 01:11:15.220995 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.221004 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.221058 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.221069 | orchestrator | 
2025-09-13 01:11:15.221079 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-13 01:11:15.221088 | orchestrator | Saturday 13 September 2025 01:03:41 +0000 (0:00:00.614) 0:01:37.211 ****
2025-09-13 01:11:15.221098 | orchestrator | skipping: [testbed-node-1] => (item=None) 
2025-09-13 01:11:15.221108 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.221117 | orchestrator | skipping: [testbed-node-2] => (item=None) 
2025-09-13 01:11:15.221127 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.221136 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-09-13 01:11:15.221146 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-09-13 01:11:15.221156 | orchestrator | 
2025-09-13 01:11:15.221166 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-13 01:11:15.221194 | orchestrator | Saturday 13 September 2025 01:03:50 +0000 (0:00:08.768) 0:01:45.979 ****
2025-09-13 01:11:15.221204 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.221214 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.221223 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.221233 | orchestrator | 
2025-09-13 01:11:15.221243 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-13 01:11:15.221252 | orchestrator | Saturday 13 September 2025 01:03:50 +0000 (0:00:00.459) 0:01:46.438 ****
2025-09-13 01:11:15.221270 | orchestrator | skipping: [testbed-node-0] => (item=None) 
2025-09-13 01:11:15.221280 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.221289 | orchestrator | skipping: [testbed-node-1] => (item=None) 
2025-09-13 01:11:15.221299 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.221309 | orchestrator | skipping: [testbed-node-2] => (item=None) 
2025-09-13 01:11:15.221318 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.221328 | orchestrator | 
2025-09-13 01:11:15.221337 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-09-13 01:11:15.221347 | orchestrator | Saturday 13 September 2025 01:03:51 +0000 (0:00:00.670) 0:01:47.109 ****
2025-09-13 01:11:15.221357 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.221366 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.221376 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:11:15.221386 | orchestrator | 
2025-09-13 01:11:15.221395 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-09-13 01:11:15.221405 | orchestrator | Saturday 13 September 2025 01:03:52 +0000 (0:00:00.552) 0:01:47.661 ****
2025-09-13 01:11:15.221415 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.221424 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.221433 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:11:15.221443 | orchestrator | 
2025-09-13 01:11:15.221452 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-09-13 01:11:15.221462 | orchestrator | Saturday 13 September 2025 01:03:53 +0000 (0:00:00.999) 0:01:48.660 ****
2025-09-13 01:11:15.221472 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.221481 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.221578 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:11:15.221593 | orchestrator | 
2025-09-13 01:11:15.221603 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-09-13 01:11:15.221612 | orchestrator | Saturday 13 September 2025 01:03:56 +0000 (0:00:03.286) 0:01:51.947 ****
2025-09-13 01:11:15.221622 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.221632 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.221641 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:11:15.221651 | orchestrator | 
2025-09-13 01:11:15.221660 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-13 01:11:15.221670 | orchestrator | Saturday 13 September 2025 01:04:17 +0000 (0:00:21.036) 0:02:12.984 ****
2025-09-13 01:11:15.221680 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.221689 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.221699 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:11:15.221708 | orchestrator | 
2025-09-13 01:11:15.221718 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-13 01:11:15.221727 | orchestrator | Saturday 13 September 2025 01:04:30 +0000 (0:00:13.010) 0:02:25.995 ****
2025-09-13 01:11:15.221737 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:11:15.221747 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.221756 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.221766 | orchestrator | 
2025-09-13 01:11:15.221775 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-09-13 01:11:15.221785 | orchestrator | Saturday 13 September 2025 01:04:31 +0000 (0:00:00.946) 0:02:26.941 ****
2025-09-13 01:11:15.221795 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.221804 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.221822 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:11:15.221832 | orchestrator | 
2025-09-13 01:11:15.221841 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-09-13 01:11:15.221851 | orchestrator | Saturday 13 September 2025 01:04:42 +0000 (0:00:11.351) 0:02:38.292 ****
2025-09-13 01:11:15.221860 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.221870 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.221880 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.221889 | orchestrator | 
2025-09-13 01:11:15.221899 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-13 01:11:15.221908 | orchestrator | Saturday 13 September 2025 01:04:43 +0000 (0:00:01.069) 0:02:39.362 ****
2025-09-13 01:11:15.221918 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.221928 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.221937 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.221946 | orchestrator | 
2025-09-13 01:11:15.221956 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-09-13 01:11:15.221966 | orchestrator | 
2025-09-13 01:11:15.221976 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-13 01:11:15.221985 | orchestrator | Saturday 13 September 2025 01:04:44 +0000 (0:00:00.505) 0:02:39.867 ****
2025-09-13 01:11:15.221995 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 01:11:15.222006 | orchestrator | 
2025-09-13 01:11:15.222066 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-09-13 01:11:15.222076 | orchestrator | Saturday 13 September 2025 01:04:44 +0000 (0:00:00.539) 0:02:40.407 ****
2025-09-13 01:11:15.222086 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy)) 
2025-09-13 01:11:15.222095 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-09-13 01:11:15.222105 | orchestrator | 
2025-09-13 01:11:15.222114 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-09-13 01:11:15.222124 | orchestrator | Saturday 13 September 2025 01:04:47 +0000 (0:00:03.125) 0:02:43.532 ****
2025-09-13 01:11:15.222134 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal) 
2025-09-13 01:11:15.222145 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public) 
2025-09-13 01:11:15.222155 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-09-13 01:11:15.222164 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-09-13 01:11:15.222174 | orchestrator | 
2025-09-13 01:11:15.222183 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-09-13 01:11:15.222199 | orchestrator | Saturday 13 September 2025 01:04:54 +0000 (0:00:06.243) 0:02:49.776 ****
2025-09-13 01:11:15.222208 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-13 01:11:15.222218 | orchestrator | 
2025-09-13 01:11:15.222228 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-09-13 01:11:15.222237 | orchestrator | Saturday 13 September 2025 01:04:57 +0000 (0:00:03.173) 0:02:52.949 ****
2025-09-13 01:11:15.222246 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-13 01:11:15.222256 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-09-13 01:11:15.222265 | orchestrator | 
2025-09-13 01:11:15.222275 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-09-13 01:11:15.222284 | orchestrator | Saturday 13 September 2025 01:05:01 +0000 (0:00:03.740) 0:02:56.690 ****
2025-09-13 01:11:15.222294 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-13 01:11:15.222303 | orchestrator | 
2025-09-13 01:11:15.222313 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-09-13 01:11:15.222323 | orchestrator | Saturday 13 September 2025 01:05:04 +0000 (0:00:03.877) 0:03:00.568 ****
2025-09-13 01:11:15.222360 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-09-13 01:11:15.222370 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-09-13 01:11:15.222379 | orchestrator | 
2025-09-13 01:11:15.222389 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-13 01:11:15.222474 | orchestrator | Saturday 13 September 2025 01:05:13 +0000 (0:00:08.478) 0:03:09.046 ****
2025-09-13 01:11:15.222494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 01:11:15.222511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 01:11:15.222530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 01:11:15.222583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-13 01:11:15.222596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-13 01:11:15.222607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-13 01:11:15.222617 | orchestrator | 
2025-09-13 01:11:15.222627 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-09-13 01:11:15.222637 | orchestrator | Saturday 13 September 2025 01:05:14 +0000 (0:00:01.507) 0:03:10.553 ****
2025-09-13 01:11:15.222647 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.222657 | orchestrator | 
2025-09-13 01:11:15.222666 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-09-13 01:11:15.222676 | orchestrator | Saturday 13 September 2025 01:05:15 +0000 (0:00:00.291) 0:03:10.845 ****
2025-09-13 01:11:15.222685 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.222695 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.222704 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.222714 | orchestrator | 
2025-09-13 01:11:15.222723 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-09-13 01:11:15.222733 | orchestrator | Saturday 13 September 2025 01:05:15 +0000 (0:00:00.521) 0:03:11.366 ****
2025-09-13 01:11:15.222742 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-13 01:11:15.222752 | orchestrator | 
2025-09-13 01:11:15.222761 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-09-13 01:11:15.222771 | orchestrator | Saturday 13 September 2025 01:05:16 +0000 (0:00:01.160) 0:03:12.526 ****
2025-09-13 01:11:15.222781 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.222790 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.222799 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.222809 | orchestrator | 
2025-09-13 01:11:15.222818 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-13 01:11:15.222828 | orchestrator | Saturday 13 September 2025 01:05:17 +0000 (0:00:00.399) 0:03:12.925 ****
2025-09-13 01:11:15.222837 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 01:11:15.222862 | orchestrator | 
2025-09-13 01:11:15.222873 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-09-13 01:11:15.222889 | orchestrator | Saturday 13 September 2025 01:05:17 +0000 (0:00:00.615) 
0:03:13.541 **** 2025-09-13 01:11:15.222905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 01:11:15.222946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 01:11:15.222960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 01:11:15.222971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 
'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.222993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.223051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.223064 | orchestrator | 2025-09-13 01:11:15.223074 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-13 01:11:15.223085 | orchestrator | Saturday 13 September 2025 01:05:20 +0000 (0:00:02.697) 0:03:16.238 **** 2025-09-13 01:11:15.223095 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-13 01:11:15.223107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.223117 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.223132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-13 01:11:15.223151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.223162 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.223199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-13 01:11:15.223213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.223223 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.223234 | orchestrator | 2025-09-13 01:11:15.223244 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-13 01:11:15.223254 | 
orchestrator | Saturday 13 September 2025 01:05:21 +0000 (0:00:01.323) 0:03:17.562 **** 2025-09-13 01:11:15.223264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-13 01:11:15.223286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.223297 | 
orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.223334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-13 01:11:15.223347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.223358 | orchestrator | skipping: [testbed-node-1] 
2025-09-13 01:11:15.223368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-13 01:11:15.223393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.223405 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.223415 | orchestrator | 
2025-09-13 01:11:15.223424 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-13 01:11:15.223435 | orchestrator | Saturday 13 September 2025 01:05:23 +0000 (0:00:01.666) 0:03:19.228 **** 2025-09-13 01:11:15.223469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 01:11:15.223483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 01:11:15.223500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 01:11:15.223516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.223553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.223565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.223576 | orchestrator | 2025-09-13 01:11:15.223586 | orchestrator | TASK [nova : Copying over 
nova.conf] ******************************************* 2025-09-13 01:11:15.223597 | orchestrator | Saturday 13 September 2025 01:05:26 +0000 (0:00:02.861) 0:03:22.090 **** 2025-09-13 01:11:15.223607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 01:11:15.223629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 01:11:15.223665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 
01:11:15.223679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.223690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.223707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.223717 | orchestrator | 2025-09-13 01:11:15.223727 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-13 
01:11:15.223737 | orchestrator | Saturday 13 September 2025 01:05:35 +0000 (0:00:09.213) 0:03:31.304 **** 2025-09-13 01:11:15.223753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-13 01:11:15.223788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 
01:11:15.223800 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.223811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-13 01:11:15.223828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.223838 | orchestrator | skipping: 
[testbed-node-2] 2025-09-13 01:11:15.223853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-13 01:11:15.223865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.223875 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.223885 | 
orchestrator | 2025-09-13 01:11:15.223895 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-13 01:11:15.223905 | orchestrator | Saturday 13 September 2025 01:05:36 +0000 (0:00:01.042) 0:03:32.347 **** 2025-09-13 01:11:15.223915 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:11:15.223925 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:11:15.223934 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:11:15.223944 | orchestrator | 2025-09-13 01:11:15.223978 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-13 01:11:15.223990 | orchestrator | Saturday 13 September 2025 01:05:38 +0000 (0:00:01.853) 0:03:34.200 **** 2025-09-13 01:11:15.224000 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.224142 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.224155 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.224164 | orchestrator | 2025-09-13 01:11:15.224174 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-13 01:11:15.224183 | orchestrator | Saturday 13 September 2025 01:05:39 +0000 (0:00:00.569) 0:03:34.770 **** 2025-09-13 01:11:15.224194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 01:11:15.224213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 01:11:15.224263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-13 01:11:15.224276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.224293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.224303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.224313 | orchestrator | 2025-09-13 01:11:15.224323 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-13 01:11:15.224333 | orchestrator | Saturday 13 September 2025 01:05:41 +0000 (0:00:02.467) 0:03:37.237 **** 2025-09-13 01:11:15.224342 | orchestrator | 2025-09-13 01:11:15.224352 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-13 01:11:15.224361 | orchestrator | Saturday 13 September 2025 01:05:41 +0000 (0:00:00.260) 0:03:37.498 **** 2025-09-13 01:11:15.224371 | orchestrator | 2025-09-13 01:11:15.224380 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-13 01:11:15.224390 | orchestrator | Saturday 13 September 2025 01:05:42 +0000 (0:00:00.305) 0:03:37.803 **** 2025-09-13 01:11:15.224399 | orchestrator | 2025-09-13 01:11:15.224409 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-13 
01:11:15.224418 | orchestrator | Saturday 13 September 2025 01:05:42 +0000 (0:00:00.260) 0:03:38.064 **** 2025-09-13 01:11:15.224428 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:11:15.224437 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:11:15.224447 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:11:15.224456 | orchestrator | 2025-09-13 01:11:15.224466 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-13 01:11:15.224476 | orchestrator | Saturday 13 September 2025 01:06:07 +0000 (0:00:25.431) 0:04:03.495 **** 2025-09-13 01:11:15.224485 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:11:15.224495 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:11:15.224504 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:11:15.224513 | orchestrator | 2025-09-13 01:11:15.224523 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-13 01:11:15.224532 | orchestrator | 2025-09-13 01:11:15.224542 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-13 01:11:15.224551 | orchestrator | Saturday 13 September 2025 01:06:20 +0000 (0:00:12.433) 0:04:15.928 **** 2025-09-13 01:11:15.224566 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 01:11:15.224576 | orchestrator | 2025-09-13 01:11:15.224585 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-13 01:11:15.224595 | orchestrator | Saturday 13 September 2025 01:06:23 +0000 (0:00:02.915) 0:04:18.844 **** 2025-09-13 01:11:15.224604 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:11:15.224614 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:11:15.224623 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:11:15.224639 | 
orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.224649 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.224658 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.224668 | orchestrator | 2025-09-13 01:11:15.224677 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-13 01:11:15.224687 | orchestrator | Saturday 13 September 2025 01:06:24 +0000 (0:00:01.162) 0:04:20.007 **** 2025-09-13 01:11:15.224697 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.224706 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.224716 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.224725 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-13 01:11:15.224735 | orchestrator | 2025-09-13 01:11:15.224744 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-13 01:11:15.224781 | orchestrator | Saturday 13 September 2025 01:06:26 +0000 (0:00:02.104) 0:04:22.111 **** 2025-09-13 01:11:15.224792 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-13 01:11:15.224802 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-13 01:11:15.224812 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-13 01:11:15.224821 | orchestrator | 2025-09-13 01:11:15.224831 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-13 01:11:15.224841 | orchestrator | Saturday 13 September 2025 01:06:27 +0000 (0:00:01.088) 0:04:23.199 **** 2025-09-13 01:11:15.224851 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-13 01:11:15.224860 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-13 01:11:15.224870 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-13 01:11:15.224879 | orchestrator | 2025-09-13 01:11:15.224889 | 
orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-13 01:11:15.224898 | orchestrator | Saturday 13 September 2025 01:06:28 +0000 (0:00:01.363) 0:04:24.563 **** 2025-09-13 01:11:15.224908 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-13 01:11:15.224917 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:11:15.224927 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-13 01:11:15.224936 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:11:15.224946 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-13 01:11:15.224955 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:11:15.224965 | orchestrator | 2025-09-13 01:11:15.224975 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-13 01:11:15.224984 | orchestrator | Saturday 13 September 2025 01:06:30 +0000 (0:00:01.607) 0:04:26.171 **** 2025-09-13 01:11:15.224994 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-13 01:11:15.225003 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-13 01:11:15.225030 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.225040 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-13 01:11:15.225050 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-13 01:11:15.225059 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-13 01:11:15.225069 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-13 01:11:15.225078 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.225088 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-13 01:11:15.225098 | orchestrator | 
skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-13 01:11:15.225107 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.225117 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-13 01:11:15.225126 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-13 01:11:15.225146 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-13 01:11:15.225156 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-13 01:11:15.225165 | orchestrator | 2025-09-13 01:11:15.225175 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-13 01:11:15.225184 | orchestrator | Saturday 13 September 2025 01:06:31 +0000 (0:00:01.350) 0:04:27.521 **** 2025-09-13 01:11:15.225193 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.225203 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.225213 | orchestrator | changed: [testbed-node-3] 2025-09-13 01:11:15.225222 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.225232 | orchestrator | changed: [testbed-node-4] 2025-09-13 01:11:15.225241 | orchestrator | changed: [testbed-node-5] 2025-09-13 01:11:15.225250 | orchestrator | 2025-09-13 01:11:15.225260 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-13 01:11:15.225269 | orchestrator | Saturday 13 September 2025 01:06:33 +0000 (0:00:01.839) 0:04:29.361 **** 2025-09-13 01:11:15.225279 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.225289 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.225298 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.225308 | orchestrator | changed: [testbed-node-5] 2025-09-13 01:11:15.225317 | orchestrator | changed: [testbed-node-4] 2025-09-13 01:11:15.225327 | orchestrator | 
changed: [testbed-node-3] 2025-09-13 01:11:15.225336 | orchestrator | 2025-09-13 01:11:15.225351 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-13 01:11:15.225361 | orchestrator | Saturday 13 September 2025 01:06:35 +0000 (0:00:02.184) 0:04:31.546 **** 2025-09-13 01:11:15.225372 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225412 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225425 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225442 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225457 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225468 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225528 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225544 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225644 | orchestrator | 2025-09-13 01:11:15.225654 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-13 01:11:15.225664 | orchestrator | Saturday 13 September 2025 01:06:39 +0000 (0:00:03.602) 0:04:35.148 **** 2025-09-13 01:11:15.225674 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 01:11:15.225683 | orchestrator | 2025-09-13 01:11:15.225693 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-13 01:11:15.225703 | orchestrator | Saturday 13 September 2025 01:06:41 +0000 (0:00:01.793) 0:04:36.942 **** 2025-09-13 01:11:15.225713 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225728 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225764 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225803 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225827 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225837 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225902 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225922 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225938 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.225948 | orchestrator | 2025-09-13 01:11:15.225958 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-13 01:11:15.225968 | orchestrator | Saturday 13 September 2025 01:06:45 +0000 (0:00:04.456) 0:04:41.398 **** 2025-09-13 01:11:15.226004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-13 01:11:15.226065 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-13 01:11:15.226076 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.226086 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:11:15.226097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-13 01:11:15.226111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-13 01:11:15.226151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.226170 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:11:15.226180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-13 01:11:15.226190 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-13 01:11:15.226200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.226211 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:11:15.226229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-13 01:11:15.226239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.226249 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.226287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-13 01:11:15.226307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.226317 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.226327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-13 01:11:15.226337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.226346 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.226356 | orchestrator | 2025-09-13 01:11:15.226366 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-13 01:11:15.226375 | orchestrator | Saturday 13 September 2025 01:06:48 +0000 (0:00:02.996) 0:04:44.395 **** 2025-09-13 01:11:15.226390 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-13 01:11:15.226401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-13 01:11:15.226444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.226456 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:11:15.226466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-13 01:11:15.226476 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-13 01:11:15.226486 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.226496 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:11:15.226511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-13 01:11:15.226552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-13 01:11:15.226564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.226574 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:11:15.226584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-13 01:11:15.226594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.226604 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.226614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-13 01:11:15.226629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.226645 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.226655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-13 01:11:15.226691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-13 01:11:15.226703 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.226713 | orchestrator | 2025-09-13 01:11:15.226722 | orchestrator | TASK [nova-cell : 
include_tasks] ***********************************************
2025-09-13 01:11:15.226732 | orchestrator | Saturday 13 September 2025 01:06:50 +0000 (0:00:02.090) 0:04:46.486 ****
2025-09-13 01:11:15.226741 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.226751 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.226760 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.226770 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-13 01:11:15.226780 | orchestrator |
2025-09-13 01:11:15.226789 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-09-13 01:11:15.226799 | orchestrator | Saturday 13 September 2025 01:06:51 +0000 (0:00:00.692) 0:04:47.178 ****
2025-09-13 01:11:15.226808 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-13 01:11:15.226818 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-13 01:11:15.226827 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-13 01:11:15.226837 | orchestrator |
2025-09-13 01:11:15.226846 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-09-13 01:11:15.226856 | orchestrator | Saturday 13 September 2025 01:06:52 +0000 (0:00:00.846) 0:04:48.025 ****
2025-09-13 01:11:15.226865 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-13 01:11:15.226875 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-13 01:11:15.226885 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-13 01:11:15.226894 | orchestrator |
2025-09-13 01:11:15.226904 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-09-13 01:11:15.226913 | orchestrator | Saturday 13 September 2025 01:06:53 +0000 (0:00:00.451) 0:04:48.827 ****
2025-09-13 01:11:15.226923 | orchestrator | ok: [testbed-node-3]
2025-09-13 01:11:15.226932 | orchestrator | ok: [testbed-node-4]
2025-09-13 01:11:15.226942 | orchestrator | ok: [testbed-node-5]
2025-09-13 01:11:15.226951 | orchestrator |
2025-09-13 01:11:15.226961 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-09-13 01:11:15.226970 | orchestrator | Saturday 13 September 2025 01:06:53 +0000 (0:00:00.707) 0:04:49.279 ****
2025-09-13 01:11:15.226980 | orchestrator | ok: [testbed-node-3]
2025-09-13 01:11:15.226990 | orchestrator | ok: [testbed-node-4]
2025-09-13 01:11:15.226999 | orchestrator | ok: [testbed-node-5]
2025-09-13 01:11:15.227009 | orchestrator |
2025-09-13 01:11:15.227064 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-09-13 01:11:15.227081 | orchestrator | Saturday 13 September 2025 01:06:54 +0000 (0:00:00.707) 0:04:49.986 ****
2025-09-13 01:11:15.227090 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-13 01:11:15.227100 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-13 01:11:15.227109 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-13 01:11:15.227119 | orchestrator |
2025-09-13 01:11:15.227128 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-09-13 01:11:15.227138 | orchestrator | Saturday 13 September 2025 01:06:55 +0000 (0:00:01.003) 0:04:50.990 ****
2025-09-13 01:11:15.227147 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-13 01:11:15.227157 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-13 01:11:15.227166 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-13 01:11:15.227176 | orchestrator |
2025-09-13 01:11:15.227185 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-09-13 01:11:15.227195 | orchestrator | Saturday 13 September 2025 01:06:56 +0000 (0:00:01.077) 0:04:52.067 ****
2025-09-13 01:11:15.227204 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-13 01:11:15.227212 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-13 01:11:15.227220 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-13 01:11:15.227232 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-09-13 01:11:15.227240 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-09-13 01:11:15.227247 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-09-13 01:11:15.227255 | orchestrator |
2025-09-13 01:11:15.227263 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-09-13 01:11:15.227270 | orchestrator | Saturday 13 September 2025 01:07:00 +0000 (0:00:03.739) 0:04:55.807 ****
2025-09-13 01:11:15.227278 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:11:15.227286 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:11:15.227294 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:11:15.227302 | orchestrator |
2025-09-13 01:11:15.227309 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-09-13 01:11:15.227317 | orchestrator | Saturday 13 September 2025 01:07:00 +0000 (0:00:00.504) 0:04:56.311 ****
2025-09-13 01:11:15.227325 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:11:15.227333 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:11:15.227340 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:11:15.227348 | orchestrator |
2025-09-13 01:11:15.227356 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-09-13 01:11:15.227364 | orchestrator | Saturday 13 September 2025 01:07:01 +0000 (0:00:00.322) 0:04:56.633 ****
2025-09-13 01:11:15.227371 | orchestrator | changed: [testbed-node-3]
2025-09-13 01:11:15.227379 | orchestrator | changed: [testbed-node-5]
2025-09-13 01:11:15.227387 | orchestrator | changed: [testbed-node-4]
2025-09-13 01:11:15.227395 | orchestrator |
2025-09-13 01:11:15.227428 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-09-13 01:11:15.227437 | orchestrator | Saturday 13 September 2025 01:07:02 +0000 (0:00:01.255) 0:04:57.889 ****
2025-09-13 01:11:15.227445 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-13 01:11:15.227454 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-13 01:11:15.227462 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-13 01:11:15.227469 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-13 01:11:15.227478 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-13 01:11:15.227491 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-13 01:11:15.227499 | orchestrator |
2025-09-13 01:11:15.227506 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-09-13 01:11:15.227515 | orchestrator | Saturday 13 September 2025 01:07:05 +0000 (0:00:03.240) 0:05:01.129 ****
2025-09-13 01:11:15.227522 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-13 01:11:15.227530 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-13 01:11:15.227538 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-13 01:11:15.227546 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-13 01:11:15.227554 | orchestrator | changed: [testbed-node-5]
2025-09-13 01:11:15.227561 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-13 01:11:15.227569 | orchestrator | changed: [testbed-node-3]
2025-09-13 01:11:15.227577 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-13 01:11:15.227584 | orchestrator | changed: [testbed-node-4]
2025-09-13 01:11:15.227592 | orchestrator |
2025-09-13 01:11:15.227600 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-09-13 01:11:15.227608 | orchestrator | Saturday 13 September 2025 01:07:08 +0000 (0:00:03.295) 0:05:04.425 ****
2025-09-13 01:11:15.227615 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:11:15.227623 | orchestrator |
2025-09-13 01:11:15.227631 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-09-13 01:11:15.227639 | orchestrator | Saturday 13 September 2025 01:07:08 +0000 (0:00:00.130) 0:05:04.556 ****
2025-09-13 01:11:15.227646 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:11:15.227654 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:11:15.227662 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:11:15.227669 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.227677 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.227685 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.227692 | orchestrator |
2025-09-13 01:11:15.227700 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-09-13 01:11:15.227708 | orchestrator | Saturday 13 September 2025 01:07:09 +0000 (0:00:00.498) 0:05:05.054 ****
2025-09-13 01:11:15.227715 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-13 01:11:15.227723 | orchestrator |
2025-09-13 01:11:15.227731 | orchestrator | TASK [nova-cell : Set vendordata file path]
************************************ 2025-09-13 01:11:15.227739 | orchestrator | Saturday 13 September 2025 01:07:10 +0000 (0:00:00.611) 0:05:05.665 **** 2025-09-13 01:11:15.227746 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:11:15.227754 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:11:15.227762 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:11:15.227770 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.227777 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.227785 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.227793 | orchestrator | 2025-09-13 01:11:15.227801 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-13 01:11:15.227808 | orchestrator | Saturday 13 September 2025 01:07:10 +0000 (0:00:00.642) 0:05:06.308 **** 2025-09-13 01:11:15.227820 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-13 01:11:15.227841 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-13 01:11:15.227850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-13 01:11:15.227858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-13 01:11:15.227867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-13 01:11:15.227879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-13 01:11:15.227887 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-13 01:11:15.227906 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-13 01:11:15.227915 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-13 01:11:15.227923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.227931 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.227939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.227954 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}}) 2025-09-13 01:11:15.227973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.227982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.227990 | orchestrator | 2025-09-13 01:11:15.227998 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-09-13 01:11:15.228006 | orchestrator | Saturday 13 September 2025 01:07:14 +0000 (0:00:03.702) 0:05:10.011 **** 2025-09-13 01:11:15.228029 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-13 01:11:15.228037 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-13 01:11:15.228050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-13 01:11:15.228068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-13 01:11:15.228081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-13 01:11:15.228090 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-13 01:11:15.228098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-13 01:11:15.228106 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-13 
01:11:15.228118 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.228136 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.228145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-13 01:11:15.228153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-13 01:11:15.228161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.228169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.228181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-13 01:11:15.228194 | orchestrator | 2025-09-13 01:11:15.228202 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-13 01:11:15.228210 | orchestrator | Saturday 13 September 2025 01:07:22 +0000 (0:00:07.823) 0:05:17.834 **** 2025-09-13 01:11:15.228218 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:11:15.228226 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:11:15.228234 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.228242 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:11:15.228250 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.228257 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.228265 | orchestrator | 2025-09-13 01:11:15.228273 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-13 01:11:15.228281 | orchestrator | Saturday 13 September 2025 01:07:23 +0000 (0:00:01.255) 0:05:19.090 **** 2025-09-13 01:11:15.228288 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-13 01:11:15.228296 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 
'dest': 'qemu.conf'})  2025-09-13 01:11:15.228304 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-13 01:11:15.228312 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-13 01:11:15.228324 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-13 01:11:15.228332 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-13 01:11:15.228340 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.228348 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-13 01:11:15.228355 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-13 01:11:15.228363 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.228371 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-13 01:11:15.228378 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.228386 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-13 01:11:15.228394 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-13 01:11:15.228402 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-13 01:11:15.228409 | orchestrator | 2025-09-13 01:11:15.228417 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-13 01:11:15.228425 | orchestrator | Saturday 13 September 2025 01:07:27 +0000 (0:00:04.391) 0:05:23.481 **** 2025-09-13 01:11:15.228433 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:11:15.228440 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:11:15.228448 
| orchestrator | skipping: [testbed-node-5] 2025-09-13 01:11:15.228456 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.228464 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.228471 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.228479 | orchestrator | 2025-09-13 01:11:15.228487 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-13 01:11:15.228494 | orchestrator | Saturday 13 September 2025 01:07:28 +0000 (0:00:00.807) 0:05:24.289 **** 2025-09-13 01:11:15.228502 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-13 01:11:15.228515 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-13 01:11:15.228523 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-13 01:11:15.228531 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-13 01:11:15.228539 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-13 01:11:15.228547 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-13 01:11:15.228554 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-13 01:11:15.228562 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-13 01:11:15.228570 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-13 01:11:15.228577 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-13 01:11:15.228585 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.228593 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-13 01:11:15.228601 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.228608 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-13 01:11:15.228616 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.228628 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-13 01:11:15.228636 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-13 01:11:15.228644 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-13 01:11:15.228651 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-13 01:11:15.228659 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-13 01:11:15.228667 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-13 01:11:15.228675 | orchestrator | 2025-09-13 01:11:15.228683 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-13 01:11:15.228690 | orchestrator | Saturday 13 September 2025 01:07:35 +0000 (0:00:06.766) 0:05:31.055 **** 2025-09-13 01:11:15.228698 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-13 01:11:15.228706 | orchestrator | skipping: 
[testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-13 01:11:15.228717 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-13 01:11:15.228725 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-13 01:11:15.228733 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-13 01:11:15.228740 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-13 01:11:15.228748 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-13 01:11:15.228756 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-13 01:11:15.228763 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-13 01:11:15.228771 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-13 01:11:15.228784 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-13 01:11:15.228791 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-13 01:11:15.228799 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-13 01:11:15.228807 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.228815 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-13 01:11:15.228822 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.228830 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-13 01:11:15.228838 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-13 01:11:15.228846 
| orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.228854 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-13 01:11:15.228861 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-13 01:11:15.228869 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-13 01:11:15.228877 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-13 01:11:15.228885 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-13 01:11:15.228892 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-13 01:11:15.228900 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-13 01:11:15.228908 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-13 01:11:15.228916 | orchestrator | 2025-09-13 01:11:15.228923 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-13 01:11:15.228931 | orchestrator | Saturday 13 September 2025 01:07:44 +0000 (0:00:09.387) 0:05:40.443 **** 2025-09-13 01:11:15.228939 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:11:15.228946 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:11:15.228954 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:11:15.228962 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.228970 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.228977 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.228985 | orchestrator | 2025-09-13 01:11:15.228993 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-13 01:11:15.229001 | orchestrator | Saturday 13 September 2025 01:07:45 +0000 
(0:00:00.929) 0:05:41.372 **** 2025-09-13 01:11:15.229008 | orchestrator | skipping: [testbed-node-3] 2025-09-13 01:11:15.229028 | orchestrator | skipping: [testbed-node-4] 2025-09-13 01:11:15.229036 | orchestrator | skipping: [testbed-node-5] 2025-09-13 01:11:15.229044 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.229052 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.229059 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.229067 | orchestrator | 2025-09-13 01:11:15.229075 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-13 01:11:15.229083 | orchestrator | Saturday 13 September 2025 01:07:46 +0000 (0:00:00.690) 0:05:42.063 **** 2025-09-13 01:11:15.229095 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.229103 | orchestrator | changed: [testbed-node-3] 2025-09-13 01:11:15.229110 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.229118 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.229126 | orchestrator | changed: [testbed-node-4] 2025-09-13 01:11:15.229133 | orchestrator | changed: [testbed-node-5] 2025-09-13 01:11:15.229141 | orchestrator | 2025-09-13 01:11:15.229149 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-13 01:11:15.229157 | orchestrator | Saturday 13 September 2025 01:07:49 +0000 (0:00:02.711) 0:05:44.774 **** 2025-09-13 01:11:15.229174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-13 01:11:15.229183 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-13 01:11:15.229191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-13 01:11:15.229200 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:11:15.229208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-13 01:11:15.229216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-13 01:11:15.229228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-13 01:11:15.229241 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:11:15.229253 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-13 01:11:15.229262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-13 01:11:15.229270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-13 01:11:15.229278 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:11:15.229287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-13 01:11:15.229301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-13 01:11:15.229314 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.229322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-13 01:11:15.229334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-13 01:11:15.229343 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.229351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-13 01:11:15.229359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-13 01:11:15.229367 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.229375 | orchestrator |
2025-09-13 01:11:15.229383 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-09-13 01:11:15.229391 | orchestrator | Saturday 13 September 2025 01:07:51 +0000 (0:00:01.886) 0:05:46.661 ****
2025-09-13 01:11:15.229398 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-13 01:11:15.229406 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-13 01:11:15.229414 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:11:15.229422 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-13 01:11:15.229429 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-13 01:11:15.229437 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:11:15.229445 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-13 01:11:15.229453 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-13 01:11:15.229460 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:11:15.229468 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-09-13 01:11:15.229481 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-09-13 01:11:15.229488 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.229496 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-13 01:11:15.229504 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-13 01:11:15.229512 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.229519 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-09-13 01:11:15.229527 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-09-13 01:11:15.229535 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.229542 | orchestrator |
2025-09-13 01:11:15.229550 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-09-13 01:11:15.229558 | orchestrator | Saturday 13 September 2025 01:07:51 +0000 (0:00:00.730) 0:05:47.392 ****
2025-09-13 01:11:15.229570 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-13 01:11:15.229582 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-13 01:11:15.229591 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-13 01:11:15.229599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-13 01:11:15.229612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-13 01:11:15.229624 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-13 01:11:15.229633 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-13 01:11:15.229646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-13 01:11:15.229654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-13 01:11:15.229662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-13 01:11:15.229671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-13 01:11:15.229683 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-13 01:11:15.229695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-13 01:11:15.229708 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-13 01:11:15.229716 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-13 01:11:15.229724 | orchestrator |
2025-09-13 01:11:15.229732 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-13 01:11:15.229740 | orchestrator | Saturday 13 September 2025 01:07:54 +0000 (0:00:03.042) 0:05:50.434 ****
2025-09-13 01:11:15.229748 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:11:15.229756 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:11:15.229764 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:11:15.229772 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.229779 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.229787 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.229799 | orchestrator |
2025-09-13 01:11:15.229807 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-13 01:11:15.229815 | orchestrator | Saturday 13 September 2025 01:07:55 +0000 (0:00:00.785) 0:05:51.220 ****
2025-09-13 01:11:15.229823 | orchestrator |
2025-09-13 01:11:15.229831 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-13 01:11:15.229838 | orchestrator | Saturday 13 September 2025 01:07:55 +0000 (0:00:00.135) 0:05:51.355 ****
2025-09-13 01:11:15.229846 | orchestrator |
2025-09-13 01:11:15.229854 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-13 01:11:15.229861 | orchestrator | Saturday 13 September 2025 01:07:55 +0000 (0:00:00.135) 0:05:51.490 ****
2025-09-13 01:11:15.229869 | orchestrator |
2025-09-13 01:11:15.229877 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-13 01:11:15.229884 | orchestrator | Saturday 13 September 2025 01:07:56 +0000 (0:00:00.143) 0:05:51.634 ****
2025-09-13 01:11:15.229892 | orchestrator |
2025-09-13 01:11:15.229899 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-13 01:11:15.229907 | orchestrator | Saturday 13 September 2025 01:07:56 +0000 (0:00:00.132) 0:05:51.766 ****
2025-09-13 01:11:15.229915 | orchestrator |
2025-09-13 01:11:15.229922 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-13 01:11:15.229930 | orchestrator | Saturday 13 September 2025 01:07:56 +0000 (0:00:00.129) 0:05:51.896 ****
2025-09-13 01:11:15.229938 | orchestrator |
2025-09-13 01:11:15.229946 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-09-13 01:11:15.229953 | orchestrator | Saturday 13 September 2025 01:07:56 +0000 (0:00:00.317) 0:05:52.214 ****
2025-09-13 01:11:15.229961 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:11:15.229969 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:11:15.229977 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:11:15.229984 | orchestrator |
2025-09-13 01:11:15.229992 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-09-13 01:11:15.230000 | orchestrator | Saturday 13 September 2025 01:08:08 +0000 (0:00:12.234) 0:06:04.448 ****
2025-09-13 01:11:15.230008 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:11:15.230072 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:11:15.230080 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:11:15.230088 | orchestrator |
2025-09-13 01:11:15.230096 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-09-13 01:11:15.230104 | orchestrator | Saturday 13 September 2025 01:08:30 +0000 (0:00:21.593) 0:06:26.042 ****
2025-09-13 01:11:15.230116 | orchestrator | changed: [testbed-node-4]
2025-09-13 01:11:15.230124 | orchestrator | changed: [testbed-node-5]
2025-09-13 01:11:15.230132 | orchestrator | changed: [testbed-node-3]
2025-09-13 01:11:15.230140 | orchestrator |
2025-09-13 01:11:15.230147 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-09-13 01:11:15.230155 | orchestrator | Saturday 13 September 2025 01:08:50 +0000 (0:00:20.045) 0:06:46.087 ****
2025-09-13 01:11:15.230163 | orchestrator | changed: [testbed-node-3]
2025-09-13 01:11:15.230171 | orchestrator | changed: [testbed-node-5]
2025-09-13 01:11:15.230178 | orchestrator | changed: [testbed-node-4]
2025-09-13 01:11:15.230186 | orchestrator |
2025-09-13 01:11:15.230194 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-09-13 01:11:15.230202 | orchestrator | Saturday 13 September 2025 01:09:33 +0000 (0:00:43.197) 0:07:29.285 ****
2025-09-13 01:11:15.230210 | orchestrator | changed: [testbed-node-3]
2025-09-13 01:11:15.230218 | orchestrator | changed: [testbed-node-4]
2025-09-13 01:11:15.230226 | orchestrator | changed: [testbed-node-5]
2025-09-13 01:11:15.230233 | orchestrator |
2025-09-13 01:11:15.230241 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-09-13 01:11:15.230249 | orchestrator | Saturday 13 September 2025 01:09:34 +0000 (0:00:01.143) 0:07:30.428 ****
2025-09-13 01:11:15.230257 | orchestrator | changed: [testbed-node-3]
2025-09-13 01:11:15.230264 | orchestrator | changed: [testbed-node-4]
2025-09-13 01:11:15.230278 | orchestrator | changed: [testbed-node-5]
2025-09-13 01:11:15.230286 | orchestrator |
2025-09-13 01:11:15.230293 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-09-13 01:11:15.230306 | orchestrator | Saturday 13 September 2025 01:09:35 +0000 (0:00:00.767) 0:07:31.196 ****
2025-09-13 01:11:15.230314 | orchestrator | changed: [testbed-node-4]
2025-09-13 01:11:15.230322 | orchestrator | changed: [testbed-node-5]
2025-09-13 01:11:15.230329 | orchestrator | changed: [testbed-node-3]
2025-09-13 01:11:15.230337 | orchestrator |
2025-09-13 01:11:15.230345 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-09-13 01:11:15.230353 | orchestrator | Saturday 13 September 2025 01:10:01 +0000 (0:00:26.328) 0:07:57.524 ****
2025-09-13 01:11:15.230361 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:11:15.230369 | orchestrator |
2025-09-13 01:11:15.230376 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-09-13 01:11:15.230384 | orchestrator | Saturday 13 September 2025 01:10:02 +0000 (0:00:00.159) 0:07:57.684 ****
2025-09-13 01:11:15.230392 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:11:15.230400 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.230407 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.230415 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.230423 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:11:15.230431 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-09-13 01:11:15.230439 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-13 01:11:15.230446 | orchestrator |
2025-09-13 01:11:15.230454 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-09-13 01:11:15.230462 | orchestrator | Saturday 13 September 2025 01:10:26 +0000 (0:00:24.562) 0:08:22.246 ****
2025-09-13 01:11:15.230470 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.230478 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.230485 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:11:15.230493 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.230501 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:11:15.230508 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:11:15.230516 | orchestrator |
2025-09-13 01:11:15.230524 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-09-13 01:11:15.230532 | orchestrator | Saturday 13 September 2025 01:10:38 +0000 (0:00:11.392) 0:08:33.639 ****
2025-09-13 01:11:15.230540 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:11:15.230547 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.230555 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.230563 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:11:15.230570 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.230576 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2025-09-13 01:11:15.230583 | orchestrator |
2025-09-13 01:11:15.230589 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-13 01:11:15.230596 | orchestrator | Saturday 13 September 2025 01:10:41 +0000 (0:00:03.823) 0:08:37.462 ****
2025-09-13 01:11:15.230603 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-13 01:11:15.230609 | orchestrator |
2025-09-13 01:11:15.230616 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-13 01:11:15.230623 | orchestrator | Saturday 13 September 2025 01:10:54 +0000 (0:00:12.365) 0:08:49.828 ****
2025-09-13 01:11:15.230629 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-13 01:11:15.230636 | orchestrator |
2025-09-13 01:11:15.230642 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-09-13 01:11:15.230649 | orchestrator | Saturday 13 September 2025 01:10:55 +0000 (0:00:01.434) 0:08:51.263 ****
2025-09-13 01:11:15.230655 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:11:15.230666 | orchestrator |
2025-09-13 01:11:15.230673 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-09-13 01:11:15.230679 | orchestrator | Saturday 13 September 2025 01:10:57 +0000 (0:00:01.431) 0:08:52.694 ****
2025-09-13 01:11:15.230686 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-13 01:11:15.230692 | orchestrator |
2025-09-13 01:11:15.230699 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-09-13 01:11:15.230706 | orchestrator | Saturday 13 September 2025 01:11:07 +0000 (0:00:10.611) 0:09:03.306 ****
2025-09-13 01:11:15.230712 | orchestrator | ok: [testbed-node-3]
2025-09-13 01:11:15.230719 | orchestrator | ok: [testbed-node-4]
2025-09-13 01:11:15.230725 | orchestrator | ok: [testbed-node-5]
2025-09-13 01:11:15.230732 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:11:15.230738 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:11:15.230745 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:11:15.230751 | orchestrator |
2025-09-13 01:11:15.230761 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-09-13 01:11:15.230768 | orchestrator |
2025-09-13 01:11:15.230774 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-09-13 01:11:15.230781 | orchestrator | Saturday 13 September 2025 01:11:09 +0000 (0:00:01.722) 0:09:05.029 ****
2025-09-13 01:11:15.230788 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:11:15.230794 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:11:15.230801 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:11:15.230808 | orchestrator |
2025-09-13 01:11:15.230814 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-09-13 01:11:15.230821 | orchestrator |
2025-09-13 01:11:15.230827 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-09-13 01:11:15.230834 | orchestrator | Saturday 13 September 2025 01:11:10 +0000 (0:00:01.148) 0:09:06.178 ****
2025-09-13 01:11:15.230840 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.230847 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.230854 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.230860 | orchestrator |
2025-09-13 01:11:15.230867 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-09-13 01:11:15.230874 | orchestrator |
2025-09-13 01:11:15.230880 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-09-13 01:11:15.230887 | orchestrator | Saturday 13 September 2025 01:11:11 +0000 (0:00:00.527) 0:09:06.705 ****
2025-09-13 01:11:15.230893 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-09-13 01:11:15.230903 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-13 01:11:15.230910 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-13 01:11:15.230917 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-09-13 01:11:15.230923 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-09-13 01:11:15.230930 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-09-13 01:11:15.230936 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-09-13 01:11:15.230943 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-13 01:11:15.230949 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-13 01:11:15.230956 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-09-13 01:11:15.230963 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-09-13 01:11:15.230969 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-09-13 01:11:15.230976 | orchestrator | skipping: [testbed-node-3]
2025-09-13 01:11:15.230982 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-09-13 01:11:15.230989 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-13 01:11:15.230995 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-13 01:11:15.231002 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-09-13 01:11:15.231027 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-09-13 01:11:15.231034 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-09-13 01:11:15.231040 | orchestrator | skipping: [testbed-node-4]
2025-09-13 01:11:15.231047 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-09-13 01:11:15.231054 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-09-13 01:11:15.231060 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-09-13 01:11:15.231067 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-09-13 01:11:15.231073 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-09-13 01:11:15.231080 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-09-13 01:11:15.231086 | orchestrator | skipping: [testbed-node-5]
2025-09-13 01:11:15.231093 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-09-13 01:11:15.231099 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-13 01:11:15.231106 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-13 01:11:15.231112 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-09-13 01:11:15.231119 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-09-13 01:11:15.231125 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-09-13 01:11:15.231132 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.231138 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.231145 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-09-13 01:11:15.231151 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-09-13 01:11:15.231158 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-09-13 01:11:15.231164 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-09-13 01:11:15.231171 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-09-13 01:11:15.231177 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-09-13 01:11:15.231184 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.231190 | orchestrator |
2025-09-13 01:11:15.231197 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-09-13 01:11:15.231204 | orchestrator |
2025-09-13 01:11:15.231210 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-09-13 01:11:15.231217 | orchestrator | Saturday 13 September 2025 01:11:12 +0000 (0:00:01.275) 0:09:07.981 ****
2025-09-13 01:11:15.231223 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-09-13 01:11:15.231230 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-09-13 01:11:15.231236 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:11:15.231243 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-09-13 01:11:15.231250 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-09-13 01:11:15.231256 | orchestrator | skipping: [testbed-node-1]
2025-09-13 01:11:15.231266 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-09-13 01:11:15.231273 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-09-13 01:11:15.231280 | orchestrator | skipping: [testbed-node-2]
2025-09-13 01:11:15.231286 | orchestrator |
2025-09-13 01:11:15.231293 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-09-13 01:11:15.231300 | orchestrator |
2025-09-13 01:11:15.231306 | orchestrator | TASK [nova : Run Nova API online database migrations]
************************** 2025-09-13 01:11:15.231313 | orchestrator | Saturday 13 September 2025 01:11:13 +0000 (0:00:00.736) 0:09:08.718 **** 2025-09-13 01:11:15.231319 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.231326 | orchestrator | 2025-09-13 01:11:15.231333 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-09-13 01:11:15.231339 | orchestrator | 2025-09-13 01:11:15.231346 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-09-13 01:11:15.231359 | orchestrator | Saturday 13 September 2025 01:11:13 +0000 (0:00:00.652) 0:09:09.370 **** 2025-09-13 01:11:15.231366 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:11:15.231373 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:11:15.231379 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:11:15.231386 | orchestrator | 2025-09-13 01:11:15.231392 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 01:11:15.231399 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-13 01:11:15.231409 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-09-13 01:11:15.231416 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-13 01:11:15.231423 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-13 01:11:15.231430 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-13 01:11:15.231437 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-13 01:11:15.231443 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 
2025-09-13 01:11:15.231450 | orchestrator | 2025-09-13 01:11:15.231456 | orchestrator | 2025-09-13 01:11:15.231463 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 01:11:15.231470 | orchestrator | Saturday 13 September 2025 01:11:14 +0000 (0:00:00.456) 0:09:09.827 **** 2025-09-13 01:11:15.231476 | orchestrator | =============================================================================== 2025-09-13 01:11:15.231483 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 43.20s 2025-09-13 01:11:15.231489 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.95s 2025-09-13 01:11:15.231496 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 26.33s 2025-09-13 01:11:15.231503 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 25.43s 2025-09-13 01:11:15.231509 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 24.56s 2025-09-13 01:11:15.231516 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 21.59s 2025-09-13 01:11:15.231522 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.04s 2025-09-13 01:11:15.231529 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.05s 2025-09-13 01:11:15.231535 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.72s 2025-09-13 01:11:15.231542 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.01s 2025-09-13 01:11:15.231548 | orchestrator | nova : Restart nova-api container -------------------------------------- 12.43s 2025-09-13 01:11:15.231555 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.37s 2025-09-13 01:11:15.231561 | orchestrator | nova-cell : 
Restart nova-conductor container --------------------------- 12.23s 2025-09-13 01:11:15.231568 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 12.05s 2025-09-13 01:11:15.231574 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.39s 2025-09-13 01:11:15.231581 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.35s 2025-09-13 01:11:15.231587 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.76s 2025-09-13 01:11:15.231594 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.61s 2025-09-13 01:11:15.231604 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 9.39s 2025-09-13 01:11:15.231611 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.22s 2025-09-13 01:11:15.231618 | orchestrator | 2025-09-13 01:11:15 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:11:18.267332 | orchestrator | 2025-09-13 01:11:18 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:11:18.267469 | orchestrator | 2025-09-13 01:11:18 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:11:21.308123 | orchestrator | 2025-09-13 01:11:21 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:11:21.308223 | orchestrator | 2025-09-13 01:11:21 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:11:24.350844 | orchestrator | 2025-09-13 01:11:24 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:11:24.350938 | orchestrator | 2025-09-13 01:11:24 | INFO  | Wait 1 second(s) until the next check 2025-09-13 01:11:27.396110 | orchestrator | 2025-09-13 01:11:27 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED 2025-09-13 01:11:27.396209 | orchestrator | 2025-09-13 01:11:27 | INFO  
| Wait 1 second(s) until the next check
2025-09-13 01:14:21.006735 | orchestrator | 2025-09-13 01:14:21 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state STARTED
2025-09-13 01:14:21.006841 | orchestrator | 2025-09-13 01:14:21 | INFO  | Wait 1 second(s) until the next check
2025-09-13 01:14:24.064853 | orchestrator | 2025-09-13 01:14:24 | INFO  | Task 9b5a4de8-462c-4a87-aa4f-31f0a5175087 is in state SUCCESS
2025-09-13 01:14:24.066429 | orchestrator |
2025-09-13 01:14:24.066589 | orchestrator |
2025-09-13 01:14:24.066694 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-13 01:14:24.066712 | orchestrator |
2025-09-13 01:14:24.066724 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-13 01:14:24.066736 | orchestrator | Saturday 13 September 2025 01:08:52 +0000 (0:00:00.325) 0:00:00.325 ****
2025-09-13 01:14:24.066747 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:14:24.067164 | orchestrator | ok: [testbed-node-1]
2025-09-13 01:14:24.067444 | orchestrator | ok: [testbed-node-2]
2025-09-13 01:14:24.067456 | orchestrator |
2025-09-13 01:14:24.067468 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-13 01:14:24.067490 | orchestrator | Saturday 13 September 2025 01:08:52 +0000 (0:00:00.390) 0:00:00.716 ****
2025-09-13 01:14:24.067502 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-09-13 01:14:24.067514 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-09-13 01:14:24.067525 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-09-13 01:14:24.067536 | orchestrator |
2025-09-13 01:14:24.067547 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-09-13 01:14:24.067558 | orchestrator |
2025-09-13 01:14:24.067569 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-13 01:14:24.067580 | orchestrator | Saturday 13 September
2025 01:08:53 +0000 (0:00:00.619) 0:00:01.335 ****
2025-09-13 01:14:24.067592 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 01:14:24.067604 | orchestrator |
2025-09-13 01:14:24.067615 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-09-13 01:14:24.067626 | orchestrator | Saturday 13 September 2025 01:08:54 +0000 (0:00:00.734) 0:00:02.069 ****
2025-09-13 01:14:24.067637 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-09-13 01:14:24.067648 | orchestrator |
2025-09-13 01:14:24.067659 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-09-13 01:14:24.067670 | orchestrator | Saturday 13 September 2025 01:08:57 +0000 (0:00:03.525) 0:00:05.594 ****
2025-09-13 01:14:24.067693 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-09-13 01:14:24.067714 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-09-13 01:14:24.067725 | orchestrator |
2025-09-13 01:14:24.067736 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-09-13 01:14:24.067747 | orchestrator | Saturday 13 September 2025 01:09:04 +0000 (0:00:07.044) 0:00:12.639 ****
2025-09-13 01:14:24.067758 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-13 01:14:24.067769 | orchestrator |
2025-09-13 01:14:24.067780 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-09-13 01:14:24.067791 | orchestrator | Saturday 13 September 2025 01:09:08 +0000 (0:00:03.353) 0:00:15.993 ****
2025-09-13 01:14:24.067802 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-13 01:14:24.067836 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-09-13 01:14:24.067847 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-09-13 01:14:24.067859 | orchestrator |
2025-09-13 01:14:24.067870 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-09-13 01:14:24.067881 | orchestrator | Saturday 13 September 2025 01:09:16 +0000 (0:00:08.519) 0:00:24.512 ****
2025-09-13 01:14:24.067892 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-13 01:14:24.067903 | orchestrator |
2025-09-13 01:14:24.067915 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-09-13 01:14:24.067926 | orchestrator | Saturday 13 September 2025 01:09:20 +0000 (0:00:03.523) 0:00:28.036 ****
2025-09-13 01:14:24.067936 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-09-13 01:14:24.067947 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-09-13 01:14:24.067958 | orchestrator |
2025-09-13 01:14:24.067969 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-09-13 01:14:24.067980 | orchestrator | Saturday 13 September 2025 01:09:28 +0000 (0:00:07.769) 0:00:35.806 ****
2025-09-13 01:14:24.067991 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-09-13 01:14:24.068001 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-09-13 01:14:24.068034 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-09-13 01:14:24.068045 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-09-13 01:14:24.068071 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-09-13 01:14:24.068084 | orchestrator |
2025-09-13 01:14:24.068097 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-13 01:14:24.068109 | orchestrator | Saturday 13 September 2025 01:09:43 +0000 (0:00:15.872) 0:00:51.679 ****
2025-09-13 01:14:24.068121 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 01:14:24.068134 | orchestrator |
2025-09-13 01:14:24.068147 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-09-13 01:14:24.068160 | orchestrator | Saturday 13 September 2025 01:09:44 +0000 (0:00:00.567) 0:00:52.246 ****
2025-09-13 01:14:24.068172 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:14:24.068185 | orchestrator |
2025-09-13 01:14:24.068198 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2025-09-13 01:14:24.068210 | orchestrator | Saturday 13 September 2025 01:09:49 +0000 (0:00:04.688) 0:00:56.934 ****
2025-09-13 01:14:24.068222 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:14:24.068234 | orchestrator |
2025-09-13 01:14:24.068247 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-09-13 01:14:24.068370 | orchestrator | Saturday 13 September 2025 01:09:54 +0000 (0:00:04.873) 0:01:01.808 ****
2025-09-13 01:14:24.068389 | orchestrator | ok: [testbed-node-0]
2025-09-13 01:14:24.068402 | orchestrator |
2025-09-13 01:14:24.068415 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2025-09-13 01:14:24.068428 | orchestrator | Saturday 13 September 2025 01:09:57 +0000 (0:00:03.320) 0:01:05.129 ****
2025-09-13 01:14:24.068442 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-09-13 01:14:24.068454 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-09-13 01:14:24.068465 | orchestrator |
2025-09-13 01:14:24.068476 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2025-09-13 01:14:24.068487 | orchestrator | Saturday 13 September 2025 01:10:08 +0000 (0:00:10.885) 0:01:16.015 ****
2025-09-13 01:14:24.068498 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2025-09-13 01:14:24.068509 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2025-09-13 01:14:24.068522 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2025-09-13 01:14:24.068545 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2025-09-13 01:14:24.068556 | orchestrator |
2025-09-13 01:14:24.068567 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2025-09-13 01:14:24.068578 | orchestrator | Saturday 13 September 2025 01:10:25 +0000 (0:00:17.041) 0:01:33.057 ****
2025-09-13 01:14:24.068589 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:14:24.068600 | orchestrator |
2025-09-13 01:14:24.068611 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2025-09-13 01:14:24.068622 | orchestrator | Saturday 13 September 2025 01:10:30 +0000 (0:00:05.011) 0:01:38.068 ****
2025-09-13 01:14:24.068633 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:14:24.068644 | orchestrator |
2025-09-13 01:14:24.068655 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2025-09-13 01:14:24.068666 | orchestrator | Saturday 13 September 2025 01:10:35 +0000 (0:00:05.723) 0:01:43.791 ****
2025-09-13 01:14:24.068677 | orchestrator | skipping: [testbed-node-0]
2025-09-13 01:14:24.068687 | orchestrator |
2025-09-13 01:14:24.068698 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2025-09-13 01:14:24.068709 | orchestrator | Saturday 13 September 2025 01:10:36 +0000 (0:00:00.303) 0:01:44.095 ****
2025-09-13 01:14:24.068720 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:14:24.068731 | orchestrator |
2025-09-13 01:14:24.068742 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-13 01:14:24.068773 | orchestrator | Saturday 13 September 2025 01:10:42 +0000 (0:00:05.812) 0:01:49.907 ****
2025-09-13 01:14:24.068785 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-13 01:14:24.068797 | orchestrator |
2025-09-13 01:14:24.068808 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2025-09-13 01:14:24.068819 | orchestrator | Saturday 13 September 2025 01:10:42 +0000 (0:00:00.874) 0:01:50.782 ****
2025-09-13 01:14:24.068830 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:14:24.068841 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:14:24.068852 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:14:24.068863 | orchestrator |
2025-09-13 01:14:24.068874 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2025-09-13 01:14:24.068885 | orchestrator | Saturday 13 September 2025 01:10:48 +0000 (0:00:05.245) 0:01:56.027 ****
2025-09-13 01:14:24.068896 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:14:24.068907 | orchestrator | changed: [testbed-node-2]
2025-09-13 01:14:24.068918 | orchestrator | changed: [testbed-node-1]
2025-09-13 01:14:24.068929 | orchestrator |
2025-09-13 01:14:24.068939 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-09-13 01:14:24.068950 | orchestrator | Saturday 13 September 2025 01:10:53 +0000 (0:00:05.230) 0:02:01.257 ****
2025-09-13 01:14:24.068961 | orchestrator | changed: [testbed-node-0]
2025-09-13 01:14:24.068972
| orchestrator | changed: [testbed-node-1] 2025-09-13 01:14:24.068983 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:14:24.068994 | orchestrator | 2025-09-13 01:14:24.069005 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-09-13 01:14:24.069050 | orchestrator | Saturday 13 September 2025 01:10:54 +0000 (0:00:00.817) 0:02:02.075 **** 2025-09-13 01:14:24.069061 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:14:24.069077 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:14:24.069089 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:14:24.069100 | orchestrator | 2025-09-13 01:14:24.069118 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-09-13 01:14:24.069129 | orchestrator | Saturday 13 September 2025 01:10:56 +0000 (0:00:02.015) 0:02:04.091 **** 2025-09-13 01:14:24.069140 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:14:24.069159 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:14:24.069170 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:14:24.069181 | orchestrator | 2025-09-13 01:14:24.069192 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-09-13 01:14:24.069202 | orchestrator | Saturday 13 September 2025 01:10:57 +0000 (0:00:01.436) 0:02:05.528 **** 2025-09-13 01:14:24.069213 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:14:24.069224 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:14:24.069235 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:14:24.069245 | orchestrator | 2025-09-13 01:14:24.069256 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-09-13 01:14:24.069267 | orchestrator | Saturday 13 September 2025 01:10:58 +0000 (0:00:01.146) 0:02:06.674 **** 2025-09-13 01:14:24.069278 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:14:24.069289 | orchestrator | changed: 
[testbed-node-0] 2025-09-13 01:14:24.069300 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:14:24.069310 | orchestrator | 2025-09-13 01:14:24.069354 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-09-13 01:14:24.069367 | orchestrator | Saturday 13 September 2025 01:11:00 +0000 (0:00:01.983) 0:02:08.658 **** 2025-09-13 01:14:24.069378 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:14:24.069388 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:14:24.069399 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:14:24.069410 | orchestrator | 2025-09-13 01:14:24.069421 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-09-13 01:14:24.069432 | orchestrator | Saturday 13 September 2025 01:11:02 +0000 (0:00:01.583) 0:02:10.241 **** 2025-09-13 01:14:24.069442 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:14:24.069453 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:14:24.069464 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:14:24.069475 | orchestrator | 2025-09-13 01:14:24.069486 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-09-13 01:14:24.069497 | orchestrator | Saturday 13 September 2025 01:11:03 +0000 (0:00:00.898) 0:02:11.140 **** 2025-09-13 01:14:24.069508 | orchestrator | ok: [testbed-node-2] 2025-09-13 01:14:24.069519 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:14:24.069529 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:14:24.069540 | orchestrator | 2025-09-13 01:14:24.069551 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-13 01:14:24.069562 | orchestrator | Saturday 13 September 2025 01:11:06 +0000 (0:00:02.747) 0:02:13.887 **** 2025-09-13 01:14:24.069573 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 
01:14:24.069584 | orchestrator | 2025-09-13 01:14:24.069595 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-09-13 01:14:24.069605 | orchestrator | Saturday 13 September 2025 01:11:06 +0000 (0:00:00.532) 0:02:14.420 **** 2025-09-13 01:14:24.069616 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:14:24.069627 | orchestrator | 2025-09-13 01:14:24.069638 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-13 01:14:24.069649 | orchestrator | Saturday 13 September 2025 01:11:11 +0000 (0:00:04.440) 0:02:18.861 **** 2025-09-13 01:14:24.069660 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:14:24.069671 | orchestrator | 2025-09-13 01:14:24.069682 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-09-13 01:14:24.069693 | orchestrator | Saturday 13 September 2025 01:11:14 +0000 (0:00:03.111) 0:02:21.973 **** 2025-09-13 01:14:24.069704 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-13 01:14:24.069715 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-13 01:14:24.069726 | orchestrator | 2025-09-13 01:14:24.069737 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-09-13 01:14:24.069748 | orchestrator | Saturday 13 September 2025 01:11:20 +0000 (0:00:06.632) 0:02:28.605 **** 2025-09-13 01:14:24.069759 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:14:24.069778 | orchestrator | 2025-09-13 01:14:24.069789 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-09-13 01:14:24.069800 | orchestrator | Saturday 13 September 2025 01:11:24 +0000 (0:00:03.381) 0:02:31.987 **** 2025-09-13 01:14:24.069810 | orchestrator | ok: [testbed-node-0] 2025-09-13 01:14:24.069821 | orchestrator | ok: [testbed-node-1] 2025-09-13 01:14:24.069832 | orchestrator | ok: [testbed-node-2] 
2025-09-13 01:14:24.069843 | orchestrator | 2025-09-13 01:14:24.069853 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-09-13 01:14:24.069864 | orchestrator | Saturday 13 September 2025 01:11:24 +0000 (0:00:00.301) 0:02:32.289 **** 2025-09-13 01:14:24.069879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-13 01:14:24.069922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-13 01:14:24.069935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-13 01:14:24.069948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-13 01:14:24.069968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-13 01:14:24.070093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-13 01:14:24.070113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.070130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.070177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.070190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.070202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.070224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.070236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:14:24.070254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-13 
01:14:24.070266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:14:24.070278 | orchestrator | 2025-09-13 01:14:24.070289 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-09-13 01:14:24.070301 | orchestrator | Saturday 13 September 2025 01:11:27 +0000 (0:00:02.600) 0:02:34.890 **** 2025-09-13 01:14:24.070312 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:14:24.070323 | orchestrator | 2025-09-13 01:14:24.070359 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-09-13 01:14:24.070373 | orchestrator | Saturday 13 September 2025 01:11:27 +0000 (0:00:00.143) 0:02:35.033 **** 2025-09-13 01:14:24.070384 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:14:24.070395 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:14:24.070406 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:14:24.070416 | orchestrator | 2025-09-13 01:14:24.070427 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-09-13 01:14:24.070438 | orchestrator | Saturday 13 September 2025 01:11:27 +0000 (0:00:00.471) 0:02:35.504 **** 2025-09-13 01:14:24.070450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-13 01:14:24.070470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-13 01:14:24.070482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-13 01:14:24.070493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-13 01:14:24.070510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:14:24.070522 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:14:24.070561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-13 01:14:24.070574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-13 01:14:24.070592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-13 01:14:24.070603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-13 01:14:24.070615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:14:24.070627 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:14:24.070649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-13 01:14:24.070687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-13 01:14:24.070700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-13 01:14:24.070720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-13 01:14:24.070731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:14:24.070743 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:14:24.070754 | orchestrator | 2025-09-13 01:14:24.070765 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-13 01:14:24.070776 | orchestrator | Saturday 13 September 2025 01:11:28 +0000 (0:00:00.672) 0:02:36.177 **** 2025-09-13 01:14:24.070787 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-13 01:14:24.070798 | orchestrator | 2025-09-13 01:14:24.070809 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-09-13 01:14:24.070820 | orchestrator | Saturday 13 September 2025 01:11:28 +0000 (0:00:00.524) 0:02:36.701 **** 2025-09-13 01:14:24.070836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-13 01:14:24.070874 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 2025-09-13 01:14:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:14:24.070896 | orchestrator | 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-13 01:14:24.070909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}}) 2025-09-13 01:14:24.070921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-13 01:14:24.070933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-13 01:14:24.070944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-13 01:14:24.070961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.070997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.071039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.071051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.071062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.071074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.071085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:14:24.071103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:14:24.071121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:14:24.071143 | orchestrator | 2025-09-13 01:14:24.071154 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-09-13 01:14:24.071165 | orchestrator | Saturday 13 September 2025 01:11:34 +0000 (0:00:05.165) 0:02:41.867 **** 2025-09-13 01:14:24.071176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-13 01:14:24.071188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-13 01:14:24.071200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-13 01:14:24.071211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-13 01:14:24.071227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:14:24.071246 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:14:24.071267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-13 01:14:24.071279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-13 01:14:24.071290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-13 01:14:24.071302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  
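The loop items echoed above all follow one shape: a services mapping whose values carry container_name, image, volumes, and an optional healthcheck, iterated per node by the role. A minimal Python sketch of that structure (illustrative only; the dict literals are trimmed copies of the logged values, and the helper name is hypothetical, not part of kolla-ansible):

```python
# Trimmed reconstruction of the services dict seen in the loop results above.
services = {
    "octavia-api": {
        "container_name": "octavia_api",
        "image": "registry.osism.tech/kolla/octavia-api:2024.2",
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9876"],
            "timeout": "30",
        },
    },
    "octavia-driver-agent": {
        "container_name": "octavia_driver_agent",
        "image": "registry.osism.tech/kolla/octavia-driver-agent:2024.2",
        # no healthcheck key, matching the octavia-driver-agent items in the log
    },
}

def healthcheck_cmds(services):
    """Map each service key to its CMD-SHELL healthcheck command, if defined."""
    return {
        key: value["healthcheck"]["test"][1]
        for key, value in services.items()
        if "healthcheck" in value
    }

print(healthcheck_cmds(services))
```

This mirrors why octavia-driver-agent items show no healthcheck in the output: the key is simply absent from that service's definition.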
2025-09-13 01:14:24.071313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:14:24.071325 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:14:24.071341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-13 01:14:24.071366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-13 01:14:24.071378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-13 01:14:24.071389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-13 01:14:24.071401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:14:24.071413 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:14:24.071424 | orchestrator | 2025-09-13 01:14:24.071435 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-09-13 01:14:24.071446 | orchestrator | Saturday 13 September 2025 01:11:34 +0000 (0:00:00.876) 0:02:42.744 **** 2025-09-13 01:14:24.071457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-13 01:14:24.071480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-13 01:14:24.071498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-13 01:14:24.071510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-13 01:14:24.071521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-13 
01:14:24.071533 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:14:24.071544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-13 01:14:24.071556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-13 01:14:24.071573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-13 01:14:24.071595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-13 01:14:24.071614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:14:24.071626 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:14:24.071637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-13 01:14:24.071649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-13 01:14:24.071660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-13 01:14:24.071671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-13 01:14:24.071693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-13 01:14:24.071705 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:14:24.071716 | orchestrator | 2025-09-13 01:14:24.071727 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-09-13 01:14:24.071738 | orchestrator | Saturday 13 September 2025 01:11:35 +0000 (0:00:00.879) 0:02:43.623 **** 2025-09-13 01:14:24.071757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-13 01:14:24.071770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-13 01:14:24.071782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-13 01:14:24.071800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-13 01:14:24.071816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-13 01:14:24.071834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-13 01:14:24.071846 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.071857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.071869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.071880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.071898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.071915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.071934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:14:24.071945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:14:24.071956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:14:24.071968 | orchestrator | 2025-09-13 01:14:24.071979 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-09-13 01:14:24.071990 | orchestrator | Saturday 13 September 2025 01:11:40 +0000 (0:00:05.053) 0:02:48.676 **** 2025-09-13 01:14:24.072001 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-13 01:14:24.072070 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-13 01:14:24.072083 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-13 01:14:24.072101 | orchestrator | 2025-09-13 01:14:24.072113 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-09-13 01:14:24.072124 | orchestrator | Saturday 13 September 2025 01:11:43 +0000 (0:00:02.131) 0:02:50.808 **** 2025-09-13 01:14:24.072135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-13 01:14:24.072152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-13 01:14:24.072172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-13 01:14:24.072183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-13 01:14:24.072195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-13 01:14:24.072213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-13 01:14:24.072225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.072241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.072259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.072271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.072283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.072294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.072313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:14:24.072324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:14:24.072341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:14:24.072352 | orchestrator | 2025-09-13 01:14:24.072363 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-09-13 01:14:24.072375 | orchestrator | Saturday 13 September 2025 01:11:59 +0000 (0:00:16.447) 0:03:07.256 **** 2025-09-13 01:14:24.072386 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:14:24.072397 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:14:24.072408 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:14:24.072419 | orchestrator | 2025-09-13 01:14:24.072430 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-09-13 01:14:24.072440 | orchestrator | Saturday 13 September 2025 01:12:00 +0000 (0:00:01.516) 0:03:08.773 **** 2025-09-13 01:14:24.072457 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-13 01:14:24.072468 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-13 01:14:24.072479 | orchestrator | changed: 
[testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-13 01:14:24.072490 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-13 01:14:24.072501 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-13 01:14:24.072512 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-13 01:14:24.072523 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-13 01:14:24.072534 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-13 01:14:24.072544 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-13 01:14:24.072555 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-13 01:14:24.072566 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-13 01:14:24.072577 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-13 01:14:24.072593 | orchestrator | 2025-09-13 01:14:24.072603 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-09-13 01:14:24.072613 | orchestrator | Saturday 13 September 2025 01:12:06 +0000 (0:00:05.398) 0:03:14.171 **** 2025-09-13 01:14:24.072623 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-13 01:14:24.072632 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-13 01:14:24.072642 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-13 01:14:24.072652 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-13 01:14:24.072661 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-13 01:14:24.072671 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-13 01:14:24.072681 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-13 01:14:24.072690 | orchestrator | changed: 
[testbed-node-1] => (item=server_ca.cert.pem) 2025-09-13 01:14:24.072700 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-13 01:14:24.072710 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-13 01:14:24.072719 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-13 01:14:24.072729 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-13 01:14:24.072739 | orchestrator | 2025-09-13 01:14:24.072748 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-09-13 01:14:24.072758 | orchestrator | Saturday 13 September 2025 01:12:11 +0000 (0:00:05.311) 0:03:19.483 **** 2025-09-13 01:14:24.072768 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-13 01:14:24.072777 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-13 01:14:24.072787 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-13 01:14:24.072797 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-13 01:14:24.072807 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-13 01:14:24.072817 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-13 01:14:24.072826 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-13 01:14:24.072836 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-13 01:14:24.072845 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-13 01:14:24.072855 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-13 01:14:24.072865 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-13 01:14:24.072874 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-13 01:14:24.072884 | orchestrator | 2025-09-13 01:14:24.072894 | 
orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-09-13 01:14:24.072904 | orchestrator | Saturday 13 September 2025 01:12:16 +0000 (0:00:05.099) 0:03:24.582 **** 2025-09-13 01:14:24.072918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-13 01:14:24.072935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-13 01:14:24.072954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-13 01:14:24.072965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-13 01:14:24.072975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-13 01:14:24.072985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-13 01:14:24.073000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.073067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.073080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.073090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.073100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.073110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-13 01:14:24.073120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:14:24.073137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:14:24.073158 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-13 01:14:24.073169 | orchestrator | 2025-09-13 01:14:24.073179 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-13 01:14:24.073188 | orchestrator | Saturday 13 September 2025 01:12:20 +0000 (0:00:03.621) 0:03:28.204 **** 2025-09-13 01:14:24.073198 | orchestrator | skipping: [testbed-node-0] 2025-09-13 01:14:24.073208 | orchestrator | skipping: [testbed-node-1] 2025-09-13 01:14:24.073218 | orchestrator | skipping: [testbed-node-2] 2025-09-13 01:14:24.073227 | orchestrator | 2025-09-13 01:14:24.073237 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-09-13 01:14:24.073247 | orchestrator | Saturday 13 September 2025 01:12:20 +0000 (0:00:00.359) 0:03:28.563 **** 2025-09-13 01:14:24.073256 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:14:24.073266 | orchestrator | 2025-09-13 01:14:24.073275 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-09-13 01:14:24.073285 | orchestrator | Saturday 13 September 2025 01:12:22 +0000 (0:00:02.093) 0:03:30.657 **** 2025-09-13 01:14:24.073295 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:14:24.073304 | orchestrator | 2025-09-13 01:14:24.073314 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-09-13 01:14:24.073323 | orchestrator | Saturday 13 September 2025 01:12:24 
+0000 (0:00:02.020) 0:03:32.678 **** 2025-09-13 01:14:24.073333 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:14:24.073343 | orchestrator | 2025-09-13 01:14:24.073352 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-09-13 01:14:24.073362 | orchestrator | Saturday 13 September 2025 01:12:27 +0000 (0:00:02.211) 0:03:34.889 **** 2025-09-13 01:14:24.073371 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:14:24.073381 | orchestrator | 2025-09-13 01:14:24.073391 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-09-13 01:14:24.073400 | orchestrator | Saturday 13 September 2025 01:12:29 +0000 (0:00:02.210) 0:03:37.100 **** 2025-09-13 01:14:24.073410 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:14:24.073419 | orchestrator | 2025-09-13 01:14:24.073429 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-13 01:14:24.073439 | orchestrator | Saturday 13 September 2025 01:12:50 +0000 (0:00:21.216) 0:03:58.317 **** 2025-09-13 01:14:24.073448 | orchestrator | 2025-09-13 01:14:24.073458 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-13 01:14:24.073467 | orchestrator | Saturday 13 September 2025 01:12:50 +0000 (0:00:00.071) 0:03:58.389 **** 2025-09-13 01:14:24.073477 | orchestrator | 2025-09-13 01:14:24.073487 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-13 01:14:24.073496 | orchestrator | Saturday 13 September 2025 01:12:50 +0000 (0:00:00.071) 0:03:58.460 **** 2025-09-13 01:14:24.073506 | orchestrator | 2025-09-13 01:14:24.073515 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-09-13 01:14:24.073525 | orchestrator | Saturday 13 September 2025 01:12:50 +0000 (0:00:00.064) 0:03:58.524 **** 2025-09-13 01:14:24.073533 | 
orchestrator | changed: [testbed-node-0] 2025-09-13 01:14:24.073546 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:14:24.073554 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:14:24.073562 | orchestrator | 2025-09-13 01:14:24.073569 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-09-13 01:14:24.073577 | orchestrator | Saturday 13 September 2025 01:13:02 +0000 (0:00:11.874) 0:04:10.398 **** 2025-09-13 01:14:24.073585 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:14:24.073593 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:14:24.073601 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:14:24.073608 | orchestrator | 2025-09-13 01:14:24.073616 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-09-13 01:14:24.073624 | orchestrator | Saturday 13 September 2025 01:13:08 +0000 (0:00:06.205) 0:04:16.604 **** 2025-09-13 01:14:24.073632 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:14:24.073640 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:14:24.073648 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:14:24.073656 | orchestrator | 2025-09-13 01:14:24.073663 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-09-13 01:14:24.073671 | orchestrator | Saturday 13 September 2025 01:13:14 +0000 (0:00:05.532) 0:04:22.137 **** 2025-09-13 01:14:24.073679 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:14:24.073687 | orchestrator | changed: [testbed-node-1] 2025-09-13 01:14:24.073695 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:14:24.073703 | orchestrator | 2025-09-13 01:14:24.073711 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-09-13 01:14:24.073723 | orchestrator | Saturday 13 September 2025 01:14:14 +0000 (0:01:00.011) 0:05:22.149 **** 2025-09-13 01:14:24.073731 | orchestrator | 
changed: [testbed-node-1] 2025-09-13 01:14:24.073739 | orchestrator | changed: [testbed-node-2] 2025-09-13 01:14:24.073747 | orchestrator | changed: [testbed-node-0] 2025-09-13 01:14:24.073755 | orchestrator | 2025-09-13 01:14:24.073762 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-13 01:14:24.073771 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-13 01:14:24.073779 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-13 01:14:24.073787 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-13 01:14:24.073795 | orchestrator | 2025-09-13 01:14:24.073803 | orchestrator | 2025-09-13 01:14:24.073811 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-13 01:14:24.073824 | orchestrator | Saturday 13 September 2025 01:14:22 +0000 (0:00:08.451) 0:05:30.600 **** 2025-09-13 01:14:24.073832 | orchestrator | =============================================================================== 2025-09-13 01:14:24.073840 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 60.01s 2025-09-13 01:14:24.073848 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.22s 2025-09-13 01:14:24.073856 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.04s 2025-09-13 01:14:24.073864 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.45s 2025-09-13 01:14:24.073872 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.87s 2025-09-13 01:14:24.073880 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.87s 2025-09-13 01:14:24.073887 | orchestrator | octavia : Create 
security groups for octavia --------------------------- 10.89s 2025-09-13 01:14:24.073895 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.52s 2025-09-13 01:14:24.073903 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 8.45s 2025-09-13 01:14:24.073911 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.77s 2025-09-13 01:14:24.073924 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.04s 2025-09-13 01:14:24.073932 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.63s 2025-09-13 01:14:24.073940 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.21s 2025-09-13 01:14:24.073948 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.81s 2025-09-13 01:14:24.073956 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.72s 2025-09-13 01:14:24.073964 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.53s 2025-09-13 01:14:24.073972 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.40s 2025-09-13 01:14:24.073980 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.31s 2025-09-13 01:14:24.073987 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.25s 2025-09-13 01:14:24.073995 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 5.23s 2025-09-13 01:14:27.113113 | orchestrator | 2025-09-13 01:14:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:14:30.156435 | orchestrator | 2025-09-13 01:14:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:14:33.195265 | orchestrator | 2025-09-13 01:14:33 | INFO  | Wait 1 
second(s) until refresh of running tasks 2025-09-13 01:14:36.232926 | orchestrator | 2025-09-13 01:14:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:14:39.275473 | orchestrator | 2025-09-13 01:14:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:14:42.318217 | orchestrator | 2025-09-13 01:14:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:14:45.368235 | orchestrator | 2025-09-13 01:14:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:14:48.408963 | orchestrator | 2025-09-13 01:14:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:14:51.451632 | orchestrator | 2025-09-13 01:14:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:14:54.492427 | orchestrator | 2025-09-13 01:14:54 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:14:57.543270 | orchestrator | 2025-09-13 01:14:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:15:00.585777 | orchestrator | 2025-09-13 01:15:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:15:03.623396 | orchestrator | 2025-09-13 01:15:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:15:06.664651 | orchestrator | 2025-09-13 01:15:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:15:09.711848 | orchestrator | 2025-09-13 01:15:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:15:12.760592 | orchestrator | 2025-09-13 01:15:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:15:15.803736 | orchestrator | 2025-09-13 01:15:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:15:18.845385 | orchestrator | 2025-09-13 01:15:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-13 01:15:21.887616 | orchestrator | 2025-09-13 01:15:21 | INFO  | Wait 1 second(s) until refresh of running tasks 
2025-09-13 01:15:24.931321 | orchestrator | 2025-09-13 01:15:25.255878 | orchestrator | 2025-09-13 01:15:25.260237 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Sep 13 01:15:25 UTC 2025 2025-09-13 01:15:25.260271 | orchestrator | 2025-09-13 01:15:25.564452 | orchestrator | ok: Runtime: 0:34:03.054328 2025-09-13 01:15:25.815685 | 2025-09-13 01:15:25.815866 | TASK [Bootstrap services] 2025-09-13 01:15:26.547874 | orchestrator | 2025-09-13 01:15:26.548116 | orchestrator | # BOOTSTRAP 2025-09-13 01:15:26.548153 | orchestrator | 2025-09-13 01:15:26.548176 | orchestrator | + set -e 2025-09-13 01:15:26.548197 | orchestrator | + echo 2025-09-13 01:15:26.548220 | orchestrator | + echo '# BOOTSTRAP' 2025-09-13 01:15:26.548251 | orchestrator | + echo 2025-09-13 01:15:26.548314 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-13 01:15:26.555168 | orchestrator | + set -e 2025-09-13 01:15:26.555202 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-13 01:15:31.074586 | orchestrator | 2025-09-13 01:15:31 | INFO  | It takes a moment until task c722d9ae-5fb6-4bb5-9395-e6d952599dec (flavor-manager) has been started and output is visible here. 
2025-09-13 01:15:34.742341 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ 2025-09-13 01:15:34.742437 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:194 │ 2025-09-13 01:15:34.742461 | orchestrator | │ in run │ 2025-09-13 01:15:34.742474 | orchestrator | │ │ 2025-09-13 01:15:34.742486 | orchestrator | │ 191 │ logger.add(sys.stderr, format=log_fmt, level=level, colorize=True) │ 2025-09-13 01:15:34.742508 | orchestrator | │ 192 │ │ 2025-09-13 01:15:34.742519 | orchestrator | │ 193 │ definitions = get_flavor_definitions(name, url) │ 2025-09-13 01:15:34.742532 | orchestrator | │ ❱ 194 │ manager = FlavorManager( │ 2025-09-13 01:15:34.742543 | orchestrator | │ 195 │ │ cloud=Cloud(cloud), │ 2025-09-13 01:15:34.742554 | orchestrator | │ 196 │ │ definitions=definitions, │ 2025-09-13 01:15:34.742565 | orchestrator | │ 197 │ │ recommended=recommended, │ 2025-09-13 01:15:34.742576 | orchestrator | │ │ 2025-09-13 01:15:34.742588 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-09-13 01:15:34.742611 | orchestrator | │ │ cloud = 'admin' │ │ 2025-09-13 01:15:34.742622 | orchestrator | │ │ debug = False │ │ 2025-09-13 01:15:34.742633 | orchestrator | │ │ definitions = { │ │ 2025-09-13 01:15:34.742644 | orchestrator | │ │ │ 'reference': [ │ │ 2025-09-13 01:15:34.742655 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-09-13 01:15:34.742666 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-09-13 01:15:34.742677 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-09-13 01:15:34.742688 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-09-13 01:15:34.742699 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-09-13 01:15:34.742710 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-09-13 01:15:34.742721 | orchestrator | │ │ │ ], │ │ 2025-09-13 01:15:34.742732 | 
orchestrator | │ │ │ 'mandatory': [ │ │ 2025-09-13 01:15:34.742743 | orchestrator | │ │ │ │ { │ │ 2025-09-13 01:15:34.742754 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-09-13 01:15:34.742788 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-13 01:15:34.742800 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-13 01:15:34.742811 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-13 01:15:34.742821 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-13 01:15:34.742832 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-13 01:15:34.742843 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-09-13 01:15:34.742854 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-09-13 01:15:34.742864 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-13 01:15:34.742875 | orchestrator | │ │ │ │ }, │ │ 2025-09-13 01:15:34.742886 | orchestrator | │ │ │ │ { │ │ 2025-09-13 01:15:34.742897 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-09-13 01:15:34.742908 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-13 01:15:34.742918 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-13 01:15:34.742929 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-13 01:15:34.742940 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-13 01:15:34.742991 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-13 01:15:34.743003 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-09-13 01:15:34.743014 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-5', │ │ 2025-09-13 01:15:34.743024 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-13 01:15:34.743035 | orchestrator | │ │ │ │ }, │ │ 2025-09-13 01:15:34.743046 | orchestrator | │ │ │ │ { │ │ 2025-09-13 01:15:34.743057 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2', │ │ 2025-09-13 01:15:34.743073 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-13 01:15:34.743084 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-13 
01:15:34.743095 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-13 01:15:34.743105 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-13 01:15:34.743117 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-13 01:15:34.743128 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2', │ │ 2025-09-13 01:15:34.743139 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2', │ │ 2025-09-13 01:15:34.743149 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-13 01:15:34.743160 | orchestrator | │ │ │ │ }, │ │ 2025-09-13 01:15:34.743171 | orchestrator | │ │ │ │ { │ │ 2025-09-13 01:15:34.743182 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2-5', │ │ 2025-09-13 01:15:34.743193 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-13 01:15:34.743211 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-13 01:15:34.743222 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-13 01:15:34.743233 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-13 01:15:34.743244 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-13 01:15:34.743255 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2:5', │ │ 2025-09-13 01:15:34.743266 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2-5', │ │ 2025-09-13 01:15:34.743276 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-13 01:15:34.743287 | orchestrator | │ │ │ │ }, │ │ 2025-09-13 01:15:34.743298 | orchestrator | │ │ │ │ { │ │ 2025-09-13 01:15:34.743309 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4', │ │ 2025-09-13 01:15:34.743319 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-13 01:15:34.743330 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-13 01:15:34.743341 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-13 01:15:34.743352 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-13 01:15:34.743362 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-13 01:15:34.743373 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4', │ │ 
2025-09-13 01:15:34.743384 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4', │ │ 2025-09-13 01:15:34.743395 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-13 01:15:34.743406 | orchestrator | │ │ │ │ }, │ │ 2025-09-13 01:15:34.743417 | orchestrator | │ │ │ │ { │ │ 2025-09-13 01:15:34.743428 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4-10', │ │ 2025-09-13 01:15:34.743439 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-13 01:15:34.743455 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-13 01:15:34.743466 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-13 01:15:34.743484 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-13 01:15:34.773787 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-13 01:15:34.773825 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4:10', │ │ 2025-09-13 01:15:34.773839 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4-10', │ │ 2025-09-13 01:15:34.773851 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-13 01:15:34.773862 | orchestrator | │ │ │ │ }, │ │ 2025-09-13 01:15:34.773873 | orchestrator | │ │ │ │ { │ │ 2025-09-13 01:15:34.773884 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8', │ │ 2025-09-13 01:15:34.773895 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-13 01:15:34.773917 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-09-13 01:15:34.773929 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-13 01:15:34.773940 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-13 01:15:34.773973 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-13 01:15:34.773984 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8', │ │ 2025-09-13 01:15:34.773995 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8', │ │ 2025-09-13 01:15:34.774006 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-13 01:15:34.774048 | orchestrator | │ │ │ │ }, │ │ 2025-09-13 01:15:34.774061 | orchestrator | │ │ │ │ { │ │ 2025-09-13 01:15:34.774072 | 
orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8-20', │ │ 2025-09-13 01:15:34.774083 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-13 01:15:34.774094 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-09-13 01:15:34.774105 | orchestrator | │ │ │ │ │ 'disk': 20, │ │ 2025-09-13 01:15:34.774116 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-13 01:15:34.774127 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-13 01:15:34.774138 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8:20', │ │ 2025-09-13 01:15:34.774148 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8-20', │ │ 2025-09-13 01:15:34.774159 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-13 01:15:34.774170 | orchestrator | │ │ │ │ }, │ │ 2025-09-13 01:15:34.774181 | orchestrator | │ │ │ │ { │ │ 2025-09-13 01:15:34.774192 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4', │ │ 2025-09-13 01:15:34.774202 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-09-13 01:15:34.774216 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-13 01:15:34.774227 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-13 01:15:34.774237 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-13 01:15:34.774248 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-13 01:15:34.774259 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4', │ │ 2025-09-13 01:15:34.774269 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4', │ │ 2025-09-13 01:15:34.774280 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-13 01:15:34.774298 | orchestrator | │ │ │ │ }, │ │ 2025-09-13 01:15:34.774309 | orchestrator | │ │ │ │ { │ │ 2025-09-13 01:15:34.774320 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4-10', │ │ 2025-09-13 01:15:34.774330 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-09-13 01:15:34.774348 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-13 01:15:34.774359 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-13 01:15:34.774380 | orchestrator | │ │ │ │ │ 
'scs:cpu-type': 'shared-core', │ │ 2025-09-13 01:15:34.774391 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-13 01:15:34.774402 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4:10', │ │ 2025-09-13 01:15:34.774413 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4-10', │ │ 2025-09-13 01:15:34.774424 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-13 01:15:34.774435 | orchestrator | │ │ │ │ }, │ │ 2025-09-13 01:15:34.774446 | orchestrator | │ │ │ │ ... +19 │ │ 2025-09-13 01:15:34.774456 | orchestrator | │ │ │ ] │ │ 2025-09-13 01:15:34.774467 | orchestrator | │ │ } │ │ 2025-09-13 01:15:34.774478 | orchestrator | │ │ level = 'INFO' │ │ 2025-09-13 01:15:34.774489 | orchestrator | │ │ limit_memory = 32 │ │ 2025-09-13 01:15:34.774500 | orchestrator | │ │ log_fmt = '{time:YYYY-MM-DD HH:mm:ss} | │ │ 2025-09-13 01:15:34.774511 | orchestrator | │ │ {level: <8} | '+17 │ │ 2025-09-13 01:15:34.774522 | orchestrator | │ │ name = 'local' │ │ 2025-09-13 01:15:34.774532 | orchestrator | │ │ recommended = True │ │ 2025-09-13 01:15:34.774543 | orchestrator | │ │ url = None │ │ 2025-09-13 01:15:34.774554 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-09-13 01:15:34.774568 | orchestrator | │ │ 2025-09-13 01:15:34.774579 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:101 │ 2025-09-13 01:15:34.774589 | orchestrator | │ in __init__ │ 2025-09-13 01:15:34.774600 | orchestrator | │ │ 2025-09-13 01:15:34.774611 | orchestrator | │ 98 │ │ self.required_flavors = definitions["mandatory"] │ 2025-09-13 01:15:34.774622 | orchestrator | │ 99 │ │ self.cloud = cloud │ 2025-09-13 01:15:34.774632 | orchestrator | │ 100 │ │ if recommended: │ 2025-09-13 01:15:34.774643 | orchestrator | │ ❱ 101 │ │ │ recommended_flavors = definitions["recommended"] │ 2025-09-13 01:15:34.774654 | orchestrator | │ 102 │ │ │ # Filter recommended flavors based on memory limit │ 2025-09-13 
01:15:34.774664 | orchestrator | │ 103 │ │ │ limit_memory_mb = limit_memory * 1024 │ 2025-09-13 01:15:34.774675 | orchestrator | │ 104 │ │ │ filtered_recommended = [ │ 2025-09-13 01:15:34.774686 | orchestrator | │ │ 2025-09-13 01:15:34.774701 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-09-13 01:15:34.774720 | orchestrator | │ │ cloud = │ │ 2025-09-13 01:15:34.774741 | orchestrator | │ │ definitions = { │ │ 2025-09-13 01:15:34.774752 | orchestrator | │ │ │ 'reference': [ │ │ 2025-09-13 01:15:34.774763 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-09-13 01:15:34.774774 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-09-13 01:15:34.774785 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-09-13 01:15:34.774795 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-09-13 01:15:34.774806 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-09-13 01:15:34.774817 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-09-13 01:15:34.774828 | orchestrator | │ │ │ ], │ │ 2025-09-13 01:15:34.774838 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-09-13 01:15:34.774854 | orchestrator | │ │ │ │ { │ │ 2025-09-13 01:15:34.803475 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-09-13 01:15:34.803505 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-13 01:15:34.803516 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-13 01:15:34.803527 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-13 01:15:34.803538 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-13 01:15:34.803548 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-13 01:15:34.803559 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-09-13 01:15:34.803570 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-09-13 01:15:34.803581 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-13 01:15:34.803592 | orchestrator | 
│ │ │ │ }, │ │ 2025-09-13 01:15:34.803603 | orchestrator | │ │ │ │ { │ │ 2025-09-13 01:15:34.803613 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-09-13 01:15:34.803624 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-13 01:15:34.803635 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-09-13 01:15:34.803646 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-13 01:15:34.803656 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-09-13 01:15:34.803667 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-13 01:15:34.803678 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-09-13 01:15:34.803688 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-5', │ │ 2025-09-13 01:15:34.803699 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-13 01:15:34.803721 | orchestrator | │ │ │ │ }, │ │ 2025-09-13 01:15:34.803731 | orchestrator | │ │ │ │ { │ │ 2025-09-13 01:15:34.803742 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2', │ │ 2025-09-13 01:15:34.803753 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-13 01:15:34.803764 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-13 01:15:34.803775 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-13 01:15:34.803785 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-13 01:15:34.803796 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-13 01:15:34.803807 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2', │ │ 2025-09-13 01:15:34.803817 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2', │ │ 2025-09-13 01:15:34.803828 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-13 01:15:34.803839 | orchestrator | │ │ │ │ }, │ │ 2025-09-13 01:15:34.803856 | orchestrator | │ │ │ │ { │ │ 2025-09-13 01:15:34.803867 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2-5', │ │ 2025-09-13 01:15:34.803877 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-13 01:15:34.803888 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-09-13 01:15:34.803899 | 
orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-09-13 01:15:34.803909 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-13 01:15:34.803920 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-13 01:15:34.803931 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2:5', │ │ 2025-09-13 01:15:34.803942 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2-5', │ │ 2025-09-13 01:15:34.803981 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-13 01:15:34.803992 | orchestrator | │ │ │ │ }, │ │ 2025-09-13 01:15:34.804012 | orchestrator | │ │ │ │ { │ │ 2025-09-13 01:15:34.804024 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4', │ │ 2025-09-13 01:15:34.804034 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-13 01:15:34.804045 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-13 01:15:34.804056 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-09-13 01:15:34.804067 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-13 01:15:34.804078 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-13 01:15:34.804088 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4', │ │ 2025-09-13 01:15:34.804099 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4', │ │ 2025-09-13 01:15:34.804110 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-09-13 01:15:34.804127 | orchestrator | │ │ │ │ }, │ │ 2025-09-13 01:15:34.804138 | orchestrator | │ │ │ │ { │ │ 2025-09-13 01:15:34.804148 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4-10', │ │ 2025-09-13 01:15:34.804159 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-09-13 01:15:34.804170 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-09-13 01:15:34.804181 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-09-13 01:15:34.804192 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-09-13 01:15:34.804203 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-09-13 01:15:34.804214 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4:10', │ │ 2025-09-13 
01:15:34.804224 | orchestrator |                 'scs:name-v2': 'SCS-1V-4-10',
2025-09-13 01:15:34.804235 | orchestrator |                 'hw_rng:allowed': 'true'
2025-09-13 01:15:34.804246 | orchestrator |             },
2025-09-13 01:15:34.804256 | orchestrator |             {
2025-09-13 01:15:34.804267 | orchestrator |                 'name': 'SCS-1V-8',
2025-09-13 01:15:34.804278 | orchestrator |                 'cpus': 1,
2025-09-13 01:15:34.804288 | orchestrator |                 'ram': 8192,
2025-09-13 01:15:34.804299 | orchestrator |                 'disk': 0,
2025-09-13 01:15:34.804309 | orchestrator |                 'scs:cpu-type': 'shared-core',
2025-09-13 01:15:34.804321 | orchestrator |                 'scs:disk0-type': 'network',
2025-09-13 01:15:34.804333 | orchestrator |                 'scs:name-v1': 'SCS-1V:8',
2025-09-13 01:15:34.804343 | orchestrator |                 'scs:name-v2': 'SCS-1V-8',
2025-09-13 01:15:34.804354 | orchestrator |                 'hw_rng:allowed': 'true'
2025-09-13 01:15:34.804365 | orchestrator |             },
2025-09-13 01:15:34.804375 | orchestrator |             {
2025-09-13 01:15:34.804386 | orchestrator |                 'name': 'SCS-1V-8-20',
2025-09-13 01:15:34.804397 | orchestrator |                 'cpus': 1,
2025-09-13 01:15:34.804408 | orchestrator |                 'ram': 8192,
2025-09-13 01:15:34.804419 | orchestrator |                 'disk': 20,
2025-09-13 01:15:34.804429 | orchestrator |                 'scs:cpu-type': 'shared-core',
2025-09-13 01:15:34.804440 | orchestrator |                 'scs:disk0-type': 'network',
2025-09-13 01:15:34.804451 | orchestrator |                 'scs:name-v1': 'SCS-1V:8:20',
2025-09-13 01:15:34.804462 | orchestrator |                 'scs:name-v2': 'SCS-1V-8-20',
2025-09-13 01:15:34.804472 | orchestrator |                 'hw_rng:allowed': 'true'
2025-09-13 01:15:34.804494 | orchestrator |             },
2025-09-13 01:15:34.879302 | orchestrator |             {
2025-09-13 01:15:34.879355 | orchestrator |                 'name': 'SCS-2V-4',
2025-09-13 01:15:34.879388 | orchestrator |                 'cpus': 2,
2025-09-13 01:15:34.879402 | orchestrator |                 'ram': 4096,
2025-09-13 01:15:34.879413 | orchestrator |                 'disk': 0,
2025-09-13 01:15:34.879424 | orchestrator |                 'scs:cpu-type': 'shared-core',
2025-09-13 01:15:34.879435 | orchestrator |                 'scs:disk0-type': 'network',
2025-09-13 01:15:34.879445 | orchestrator |                 'scs:name-v1': 'SCS-2V:4',
2025-09-13 01:15:34.879456 | orchestrator |                 'scs:name-v2': 'SCS-2V-4',
2025-09-13 01:15:34.879467 | orchestrator |                 'hw_rng:allowed': 'true'
2025-09-13 01:15:34.879478 | orchestrator |             },
2025-09-13 01:15:34.879488 | orchestrator |             {
2025-09-13 01:15:34.879499 | orchestrator |                 'name': 'SCS-2V-4-10',
2025-09-13 01:15:34.879509 | orchestrator |                 'cpus': 2,
2025-09-13 01:15:34.879520 | orchestrator |                 'ram': 4096,
2025-09-13 01:15:34.879531 | orchestrator |                 'disk': 10,
2025-09-13 01:15:34.879541 | orchestrator |                 'scs:cpu-type': 'shared-core',
2025-09-13 01:15:34.879552 | orchestrator |                 'scs:disk0-type': 'network',
2025-09-13 01:15:34.879563 | orchestrator |                 'scs:name-v1': 'SCS-2V:4:10',
2025-09-13 01:15:34.879573 | orchestrator |                 'scs:name-v2': 'SCS-2V-4-10',
2025-09-13 01:15:34.879584 | orchestrator |                 'hw_rng:allowed': 'true'
2025-09-13 01:15:34.879595 | orchestrator |             },
2025-09-13 01:15:34.879606 | orchestrator |             ... +19
2025-09-13 01:15:34.879616 | orchestrator |         ]
2025-09-13 01:15:34.879627 | orchestrator |     }
2025-09-13 01:15:34.879638 | orchestrator |     limit_memory = 32
2025-09-13 01:15:34.879648 | orchestrator |     recommended = True
2025-09-13 01:15:34.879659 | orchestrator |     self =
2025-09-13 01:15:34.879718 | orchestrator | KeyError: 'recommended'
2025-09-13 01:15:35.363757 | orchestrator | ERROR
2025-09-13 01:15:35.364067 | orchestrator | {
2025-09-13 01:15:35.364187 | orchestrator |     "delta": "0:00:09.019120",
2025-09-13 01:15:35.364255 | orchestrator |     "end": "2025-09-13 01:15:35.171901",
2025-09-13 01:15:35.364316 | orchestrator |     "msg": "non-zero return code",
2025-09-13 01:15:35.364370 | orchestrator |     "rc": 1,
2025-09-13 01:15:35.364423 | orchestrator |     "start": "2025-09-13 01:15:26.152781"
2025-09-13 01:15:35.364475 | orchestrator | } failure
2025-09-13 01:15:35.385250 | PLAY RECAP
2025-09-13 01:15:35.385338 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-09-13 01:15:35.583324 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-13 01:15:35.585798 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-13 01:15:36.335492 | PLAY [Post output play]
2025-09-13 01:15:36.351238 | LOOP [stage-output : Register sources]
2025-09-13 01:15:36.417737 | TASK [stage-output : Check sudo]
2025-09-13 01:15:37.218250 | orchestrator | sudo: a password is required
2025-09-13 01:15:37.453614 | orchestrator | ok: Runtime: 0:00:00.013058
2025-09-13 01:15:37.470627 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-13 01:15:37.520473 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-13 01:15:37.588706 | orchestrator | ok
2025-09-13 01:15:37.597637 | LOOP [stage-output : Ensure target folders exist]
2025-09-13 01:15:38.034286 | orchestrator | ok: "docs"
2025-09-13 01:15:38.240827 | orchestrator | ok: "artifacts"
2025-09-13 01:15:38.447902 | orchestrator | ok: "logs"
2025-09-13 01:15:38.474443 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-13 01:15:38.516572 | TASK [stage-output : Make all log files readable]
2025-09-13 01:15:38.779372 | orchestrator | ok
2025-09-13 01:15:38.788917 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-13 01:15:38.834375 | orchestrator | skipping: Conditional result was False
2025-09-13 01:15:38.852570 | TASK [stage-output : Discover log files for compression]
2025-09-13 01:15:38.878201 | orchestrator | skipping: Conditional result was False
2025-09-13 01:15:38.897771 | LOOP [stage-output : Archive everything from logs]
2025-09-13 01:15:38.942162 | PLAY [Post cleanup play]
2025-09-13 01:15:38.950621 | TASK [Set cloud fact (Zuul deployment)]
2025-09-13 01:15:39.009246 | orchestrator | ok
2025-09-13 01:15:39.020734 | TASK [Set cloud fact (local deployment)]
2025-09-13 01:15:39.047004 | orchestrator | skipping: Conditional result was False
2025-09-13 01:15:39.065155 | TASK [Clean the cloud environment]
2025-09-13 01:15:39.604445 | orchestrator | 2025-09-13 01:15:39 - clean up servers
2025-09-13 01:15:40.430745 | orchestrator | 2025-09-13 01:15:40 - testbed-manager
2025-09-13 01:15:40.523938 | orchestrator | 2025-09-13 01:15:40 - testbed-node-0
2025-09-13 01:15:40.623762 | orchestrator | 2025-09-13 01:15:40 - testbed-node-3
2025-09-13 01:15:40.722592 | orchestrator | 2025-09-13 01:15:40 - testbed-node-5
2025-09-13 01:15:40.817255 | orchestrator | 2025-09-13 01:15:40 - testbed-node-2
2025-09-13 01:15:40.916087 | orchestrator | 2025-09-13 01:15:40 - testbed-node-4
2025-09-13 01:15:41.008179 | orchestrator | 2025-09-13 01:15:41 - testbed-node-1
2025-09-13 01:15:41.104744 | orchestrator | 2025-09-13 01:15:41 - clean up keypairs
2025-09-13 01:15:41.124584 | orchestrator | 2025-09-13 01:15:41 - testbed
2025-09-13 01:15:41.150177 | orchestrator | 2025-09-13 01:15:41 - wait for servers to be gone
2025-09-13 01:15:52.073984 | orchestrator | 2025-09-13 01:15:52 - clean up ports
2025-09-13 01:15:52.271967 | orchestrator | 2025-09-13 01:15:52 - 1d5c51e3-f9df-41f9-a19e-ebc9095ab8d7
2025-09-13 01:15:52.746074 | orchestrator | 2025-09-13 01:15:52 - 3cc761c5-87d8-4c8f-993c-b8cebfcedaaf
2025-09-13 01:15:52.989219 | orchestrator | 2025-09-13 01:15:52 - 42cdd83b-e09f-48be-8646-85c1f53bd945
2025-09-13 01:15:53.216169 | orchestrator | 2025-09-13 01:15:53 - 4f722c75-97dc-47a5-8a2a-cff033142743
2025-09-13 01:15:53.458098 | orchestrator | 2025-09-13 01:15:53 - 76419e2a-7bea-4ae0-92b5-0a528df8577c
2025-09-13 01:15:53.667362 | orchestrator | 2025-09-13 01:15:53 - d1a96899-55df-4018-a638-98fe027b08b5
2025-09-13 01:15:53.866344 | orchestrator | 2025-09-13 01:15:53 - d69cc32c-c14f-4b87-afb9-c04d75247827
2025-09-13 01:15:54.078439 | orchestrator | 2025-09-13 01:15:54 - clean up volumes
2025-09-13 01:15:54.214525 | orchestrator | 2025-09-13 01:15:54 - testbed-volume-2-node-base
2025-09-13 01:15:54.253883 | orchestrator | 2025-09-13 01:15:54 - testbed-volume-1-node-base
2025-09-13 01:15:54.294668 | orchestrator | 2025-09-13 01:15:54 - testbed-volume-4-node-base
2025-09-13 01:15:54.335446 | orchestrator | 2025-09-13 01:15:54 - testbed-volume-0-node-base
2025-09-13 01:15:54.373568 | orchestrator | 2025-09-13 01:15:54 - testbed-volume-3-node-base
2025-09-13 01:15:54.422866 | orchestrator | 2025-09-13 01:15:54 - testbed-volume-5-node-base
2025-09-13 01:15:54.467254 | orchestrator | 2025-09-13 01:15:54 - testbed-volume-manager-base
2025-09-13 01:15:54.513741 | orchestrator | 2025-09-13 01:15:54 - testbed-volume-4-node-4
2025-09-13 01:15:54.554867 | orchestrator | 2025-09-13 01:15:54 - testbed-volume-1-node-4
2025-09-13 01:15:54.598570 | orchestrator | 2025-09-13 01:15:54 - testbed-volume-0-node-3
2025-09-13 01:15:54.670266 | orchestrator | 2025-09-13 01:15:54 - testbed-volume-8-node-5
2025-09-13 01:15:54.721361 | orchestrator | 2025-09-13 01:15:54 - testbed-volume-5-node-5
2025-09-13 01:15:54.768415 | orchestrator | 2025-09-13 01:15:54 - testbed-volume-3-node-3
2025-09-13 01:15:54.812083 | orchestrator | 2025-09-13 01:15:54 - testbed-volume-7-node-4
2025-09-13 01:15:54.860395 | orchestrator | 2025-09-13 01:15:54 - testbed-volume-2-node-5
2025-09-13 01:15:55.060573 | orchestrator | 2025-09-13 01:15:55 - testbed-volume-6-node-3
2025-09-13 01:15:55.102898 | orchestrator | 2025-09-13 01:15:55 - disconnect routers
2025-09-13 01:15:55.304108 | orchestrator | 2025-09-13 01:15:55 - testbed
2025-09-13 01:15:56.275545 | orchestrator | 2025-09-13 01:15:56 - clean up subnets
2025-09-13 01:15:56.332040 | orchestrator | 2025-09-13 01:15:56 - subnet-testbed-management
2025-09-13 01:15:56.503160 | orchestrator | 2025-09-13 01:15:56 - clean up networks
2025-09-13 01:15:56.681077 | orchestrator | 2025-09-13 01:15:56 - net-testbed-management
2025-09-13 01:15:57.021746 | orchestrator | 2025-09-13 01:15:57 - clean up security groups
2025-09-13 01:15:57.057336 | orchestrator | 2025-09-13 01:15:57 - testbed-management
2025-09-13 01:15:57.179058 | orchestrator | 2025-09-13 01:15:57 - testbed-node
2025-09-13 01:15:57.292047 | orchestrator | 2025-09-13 01:15:57 - clean up floating ips
2025-09-13 01:15:57.329475 | orchestrator | 2025-09-13 01:15:57 - 81.163.192.209
2025-09-13 01:15:57.699607 | orchestrator | 2025-09-13 01:15:57 - clean up routers
2025-09-13 01:15:57.813603 | orchestrator | 2025-09-13 01:15:57 - testbed
2025-09-13 01:15:59.124668 | orchestrator | ok: Runtime: 0:00:19.422820
2025-09-13 01:15:59.130555 | PLAY RECAP
2025-09-13 01:15:59.130682 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-09-13 01:15:59.272564 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-13 01:15:59.274515 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-13 01:16:00.001825 | PLAY [Cleanup play]
2025-09-13 01:16:00.018028 | TASK [Set cloud fact (Zuul deployment)]
2025-09-13 01:16:00.076002 | orchestrator | ok
2025-09-13 01:16:00.091660 | TASK [Set cloud fact (local deployment)]
2025-09-13 01:16:00.128405 | orchestrator | skipping: Conditional result was False
2025-09-13 01:16:00.144513 | TASK [Clean the cloud environment]
2025-09-13 01:16:01.244900 | orchestrator | 2025-09-13 01:16:01 - clean up servers
2025-09-13 01:16:01.800008 | orchestrator | 2025-09-13 01:16:01 - clean up keypairs
2025-09-13 01:16:01.813156 | orchestrator | 2025-09-13 01:16:01 - wait for servers to be gone
2025-09-13 01:16:01.849929 | orchestrator | 2025-09-13 01:16:01 - clean up ports
2025-09-13 01:16:01.920742 | orchestrator | 2025-09-13 01:16:01 - clean up volumes
2025-09-13 01:16:01.978012 | orchestrator | 2025-09-13 01:16:01 - disconnect routers
2025-09-13 01:16:02.007224 | orchestrator | 2025-09-13 01:16:02 - clean up subnets
2025-09-13 01:16:02.029876 | orchestrator | 2025-09-13 01:16:02 - clean up networks
2025-09-13 01:16:02.217936 | orchestrator | 2025-09-13 01:16:02 - clean up security groups
2025-09-13 01:16:02.265555 | orchestrator | 2025-09-13 01:16:02 - clean up floating ips
2025-09-13 01:16:02.289597 | orchestrator | 2025-09-13 01:16:02 - clean up routers
2025-09-13 01:16:02.682366 | orchestrator | ok: Runtime: 0:00:01.417723
2025-09-13 01:16:02.686287 | PLAY RECAP
2025-09-13 01:16:02.686411 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-13 01:16:02.806228 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-13 01:16:02.810095 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-13 01:16:03.534325 | PLAY [Base post-fetch]
2025-09-13 01:16:03.549706 | TASK [fetch-output : Set log path for multiple nodes]
2025-09-13 01:16:03.605365 | orchestrator | skipping: Conditional result was False
2025-09-13 01:16:03.620162 | TASK [fetch-output : Set log path for single node]
2025-09-13 01:16:03.670960 | orchestrator | ok
2025-09-13 01:16:03.680265 | LOOP [fetch-output : Ensure local output dirs]
2025-09-13 01:16:04.158021 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/6adfe52b30654ba48ae13a9ef77a3415/work/logs"
2025-09-13 01:16:04.424915 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/6adfe52b30654ba48ae13a9ef77a3415/work/artifacts"
2025-09-13 01:16:04.698366 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/6adfe52b30654ba48ae13a9ef77a3415/work/docs"
2025-09-13 01:16:04.713515 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-09-13 01:16:05.675646 | orchestrator | changed: .d..t...... ./
2025-09-13 01:16:05.675994 | orchestrator | changed: All items complete
2025-09-13 01:16:06.360956 | orchestrator | changed: .d..t...... ./
2025-09-13 01:16:07.057931 | orchestrator | changed: .d..t...... ./
2025-09-13 01:16:07.088619 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-09-13 01:16:07.132593 | orchestrator | skipping: Conditional result was False
2025-09-13 01:16:07.135180 | orchestrator | skipping: Conditional result was False
2025-09-13 01:16:07.156956 | PLAY RECAP
2025-09-13 01:16:07.157028 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-09-13 01:16:07.282177 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-13 01:16:07.285527 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-13 01:16:08.022493 | PLAY [Base post]
2025-09-13 01:16:08.036529 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-09-13 01:16:08.929216 | orchestrator | changed
2025-09-13 01:16:08.939790 | PLAY RECAP
2025-09-13 01:16:08.939873 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-09-13 01:16:09.052631 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-13 01:16:09.054767 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-09-13 01:16:09.848528 | PLAY [Base post-logs]
2025-09-13 01:16:09.859256 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-09-13 01:16:10.313987 | localhost | changed
2025-09-13 01:16:10.331785 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-09-13 01:16:10.369096 | localhost | ok
2025-09-13 01:16:10.373479 | TASK [Set zuul-log-path fact]
2025-09-13 01:16:10.400776 | localhost | ok
2025-09-13 01:16:10.414918 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-13 01:16:10.452573 | localhost | ok
2025-09-13 01:16:10.459710 | TASK [upload-logs : Create log directories]
2025-09-13 01:16:10.960023 | localhost | changed
2025-09-13 01:16:10.965969 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-09-13 01:16:11.458521 | localhost -> localhost | ok: Runtime: 0:00:00.006601
2025-09-13 01:16:11.462899 | TASK [upload-logs : Upload logs to log server]
2025-09-13 01:16:12.007137 | localhost | Output suppressed because no_log was given
2025-09-13 01:16:12.009769 | LOOP [upload-logs : Compress console log and json output]
2025-09-13 01:16:12.061274 | localhost | skipping: Conditional result was False
2025-09-13 01:16:12.065872 | localhost | skipping: Conditional result was False
2025-09-13 01:16:12.080461 | LOOP [upload-logs : Upload compressed console log and json output]
2025-09-13 01:16:12.125043 | localhost | skipping: Conditional result was False
2025-09-13 01:16:12.128959 | localhost | skipping: Conditional result was False
2025-09-13 01:16:12.142625 | LOOP [upload-logs : Upload console log and json output]
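Editor's note on the failure: the deploy playbook aborted with `KeyError: 'recommended'` while iterating over the SCS flavor definitions shown in the locals dump. The flavor-check tooling's own source is not part of this log, so the following is only a minimal hypothetical sketch of that failure mode: direct indexing into a flavor dict crashes when an entry lacks the optional `recommended` key, whereas `dict.get` with a default does not. The `flavor` dict below mirrors one entry from the dump; everything else is illustrative.

```python
# Hypothetical reconstruction of the "KeyError: 'recommended'" failure mode.
# The real check lives in the OSISM/SCS flavor tooling, not shown in this log.
flavor = {
    'name': 'SCS-2V-4',
    'cpus': 2,
    'ram': 4096,
    'disk': 0,
    'scs:cpu-type': 'shared-core',
    # note: no 'recommended' key in this entry
}

# Direct indexing assumes every entry carries the key and raises when one doesn't:
try:
    flavor['recommended']
except KeyError as exc:
    print(f"KeyError: {exc}")  # -> KeyError: 'recommended', as in the traceback

# Guarding with .get() and a default treats the metadata as optional:
recommended = flavor.get('recommended', False)
print(recommended)  # -> False
```

This is why the task exits with `rc: 1`: the unhandled `KeyError` propagates out of the script, and Ansible reports "non-zero return code".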